2025-09-19 10:43:31.222089 | Job console starting
2025-09-19 10:43:31.234765 | Updating git repos
2025-09-19 10:43:31.332590 | Cloning repos into workspace
2025-09-19 10:43:31.609791 | Restoring repo states
2025-09-19 10:43:31.631952 | Merging changes
2025-09-19 10:43:32.195542 | Checking out repos
2025-09-19 10:43:32.506864 | Preparing playbooks
2025-09-19 10:43:33.168671 | Running Ansible setup
2025-09-19 10:43:37.441400 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 10:43:38.179898 |
2025-09-19 10:43:38.180049 | PLAY [Base pre]
2025-09-19 10:43:38.196755 |
2025-09-19 10:43:38.196882 | TASK [Setup log path fact]
2025-09-19 10:43:38.227005 | orchestrator | ok
2025-09-19 10:43:38.244571 |
2025-09-19 10:43:38.244721 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 10:43:38.286040 | orchestrator | ok
2025-09-19 10:43:38.298728 |
2025-09-19 10:43:38.298856 | TASK [emit-job-header : Print job information]
2025-09-19 10:43:38.343953 | # Job Information
2025-09-19 10:43:38.344214 | Ansible Version: 2.16.14
2025-09-19 10:43:38.344298 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-19 10:43:38.344358 | Pipeline: label
2025-09-19 10:43:38.344399 | Executor: 521e9411259a
2025-09-19 10:43:38.344436 | Triggered by: https://github.com/osism/testbed/pull/2766
2025-09-19 10:43:38.344476 | Event ID: 76be4b30-9545-11f0-9c2a-5ce2e4dce44b
2025-09-19 10:43:38.353852 |
2025-09-19 10:43:38.353978 | LOOP [emit-job-header : Print node information]
2025-09-19 10:43:38.479272 | orchestrator | ok:
2025-09-19 10:43:38.479571 | orchestrator | # Node Information
2025-09-19 10:43:38.479632 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 10:43:38.479679 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 10:43:38.479719 | orchestrator | Username: zuul-testbed03
2025-09-19 10:43:38.479757 | orchestrator | Distro: Debian 12.12
2025-09-19 10:43:38.479800 | orchestrator | Provider: static-testbed
2025-09-19 10:43:38.479839 | orchestrator | Region:
2025-09-19 10:43:38.479876 | orchestrator | Label: testbed-orchestrator
2025-09-19 10:43:38.479912 | orchestrator | Product Name: OpenStack Nova
2025-09-19 10:43:38.479947 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 10:43:38.507412 |
2025-09-19 10:43:38.507576 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 10:43:38.988507 | orchestrator -> localhost | changed
2025-09-19 10:43:38.996886 |
2025-09-19 10:43:38.997010 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 10:43:40.024924 | orchestrator -> localhost | changed
2025-09-19 10:43:40.040066 |
2025-09-19 10:43:40.040186 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 10:43:40.312635 | orchestrator -> localhost | ok
2025-09-19 10:43:40.320349 |
2025-09-19 10:43:40.320471 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 10:43:40.343107 | orchestrator | ok
2025-09-19 10:43:40.359492 | orchestrator | included: /var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 10:43:40.367635 |
2025-09-19 10:43:40.367748 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 10:43:41.148116 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-19 10:43:41.148727 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/c7a2390133a342ccabfde485aae75074_id_rsa
2025-09-19 10:43:41.148837 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/c7a2390133a342ccabfde485aae75074_id_rsa.pub
2025-09-19 10:43:41.148914 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 10:43:41.148989 | orchestrator -> localhost | SHA256:AgwMiR2Z04T13Zlt/mEYKGhFCSXz/6Aa7AA/iUvUKm0 zuul-build-sshkey
2025-09-19 10:43:41.149054 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 10:43:41.149139 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 10:43:41.149205 | orchestrator -> localhost | |o=oOo +++. |
2025-09-19 10:43:41.149305 | orchestrator -> localhost | |o Bo.. B.. = |
2025-09-19 10:43:41.149370 | orchestrator -> localhost | | .o + + = + |
2025-09-19 10:43:41.149429 | orchestrator -> localhost | | . o o o o |
2025-09-19 10:43:41.149488 | orchestrator -> localhost | | o . . S o o o |
2025-09-19 10:43:41.149552 | orchestrator -> localhost | | o = o . . o o . |
2025-09-19 10:43:41.149614 | orchestrator -> localhost | |. E = o . . . |
2025-09-19 10:43:41.149672 | orchestrator -> localhost | | + . + o |
2025-09-19 10:43:41.149733 | orchestrator -> localhost | | . o |
2025-09-19 10:43:41.149792 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 10:43:41.149928 | orchestrator -> localhost | ok: Runtime: 0:00:00.301035
2025-09-19 10:43:41.167215 |
2025-09-19 10:43:41.167486 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 10:43:41.206812 | orchestrator | ok
2025-09-19 10:43:41.220682 | orchestrator | included: /var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 10:43:41.230192 |
2025-09-19 10:43:41.230329 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 10:43:41.254195 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:41.269337 |
2025-09-19 10:43:41.269482 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 10:43:41.858041 | orchestrator | changed
2025-09-19 10:43:41.867092 |
2025-09-19 10:43:41.867230 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 10:43:42.158826 | orchestrator | ok
2025-09-19 10:43:42.168645 |
2025-09-19 10:43:42.168791 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 10:43:42.593944 | orchestrator | ok
2025-09-19 10:43:42.602541 |
2025-09-19 10:43:42.602677 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 10:43:43.015803 | orchestrator | ok
2025-09-19 10:43:43.024869 |
2025-09-19 10:43:43.025000 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 10:43:43.050303 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:43.061203 |
2025-09-19 10:43:43.061384 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 10:43:43.512707 | orchestrator -> localhost | changed
2025-09-19 10:43:43.529493 |
2025-09-19 10:43:43.529942 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 10:43:43.870197 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/c7a2390133a342ccabfde485aae75074_id_rsa (zuul-build-sshkey)
2025-09-19 10:43:43.870760 | orchestrator -> localhost | ok: Runtime: 0:00:00.019055
2025-09-19 10:43:43.888890 |
2025-09-19 10:43:43.889051 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 10:43:44.303515 | orchestrator | ok
2025-09-19 10:43:44.312402 |
2025-09-19 10:43:44.312535 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 10:43:44.337716 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:44.394604 |
2025-09-19 10:43:44.394724 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 10:43:44.805012 | orchestrator | ok
2025-09-19 10:43:44.819875 |
2025-09-19 10:43:44.820008 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 10:43:44.848735 | orchestrator | ok
2025-09-19 10:43:44.855955 |
2025-09-19 10:43:44.856056 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 10:43:45.154056 | orchestrator -> localhost | ok
2025-09-19 10:43:45.169888 |
2025-09-19 10:43:45.170046 | TASK [validate-host : Collect information about the host]
2025-09-19 10:43:46.392953 | orchestrator | ok
2025-09-19 10:43:46.423067 |
2025-09-19 10:43:46.423211 | TASK [validate-host : Sanitize hostname]
2025-09-19 10:43:46.487828 | orchestrator | ok
2025-09-19 10:43:46.497163 |
2025-09-19 10:43:46.497315 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 10:43:47.034169 | orchestrator -> localhost | changed
2025-09-19 10:43:47.044286 |
2025-09-19 10:43:47.044421 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 10:43:47.466097 | orchestrator | ok
2025-09-19 10:43:47.474893 |
2025-09-19 10:43:47.475049 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 10:43:48.025983 | orchestrator -> localhost | changed
2025-09-19 10:43:48.036756 |
2025-09-19 10:43:48.036864 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 10:43:48.329213 | orchestrator | ok
2025-09-19 10:43:48.342695 |
2025-09-19 10:43:48.342895 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 10:44:23.361506 | orchestrator | changed:
2025-09-19 10:44:23.361742 | orchestrator | .d..t...... src/
2025-09-19 10:44:23.361779 | orchestrator | .d..t...... src/github.com/
2025-09-19 10:44:23.361805 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 10:44:23.361827 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 10:44:23.361848 | orchestrator | RedHat.yml
2025-09-19 10:44:23.374752 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 10:44:23.374770 | orchestrator | RedHat.yml
2025-09-19 10:44:23.374822 | orchestrator | = 2.2.0"...
2025-09-19 10:44:35.165553 | orchestrator | 10:44:35.165 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-19 10:44:35.195170 | orchestrator | 10:44:35.194 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-19 10:44:35.364762 | orchestrator | 10:44:35.364 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 10:44:35.822059 | orchestrator | 10:44:35.821 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 10:44:36.217783 | orchestrator | 10:44:36.217 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-19 10:44:37.047788 | orchestrator | 10:44:37.047 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 10:44:37.436352 | orchestrator | 10:44:37.436 STDOUT terraform: - Installing hashicorp/local v2.5.3...
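For anyone post-processing this console output, the `timestamp | node | message` line format above (with an optional delegation marker such as `orchestrator -> localhost`, and header lines that carry no node field) can be split with a small parser. This is a hypothetical helper for log analysis, not part of Zuul itself:

```python
import re

# Split a Zuul console line "<timestamp> | <node> | <message>" into its fields.
# The node field is optional (header lines like "Job console starting" omit it)
# and may contain a delegation marker such as "orchestrator -> localhost".
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \|"
    r"(?: (?P<node>[^|]+?) \|)?"
    r"(?: (?P<msg>.*))?$"
)

def parse_console_line(line):
    """Return (timestamp, node, message); node is None for header lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("ts"), m.group("node"), m.group("msg") or ""
```

This is only a sketch of the observed format; lines whose payload itself was wrapped by the console (for example the rsync itemized output) may need extra handling.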
2025-09-19 10:44:38.050655 | orchestrator | 10:44:38.050 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 10:44:38.050731 | orchestrator | 10:44:38.050 STDOUT terraform: Providers are signed by their developers.
2025-09-19 10:44:38.050742 | orchestrator | 10:44:38.050 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-19 10:44:38.050751 | orchestrator | 10:44:38.050 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-19 10:44:38.050850 | orchestrator | 10:44:38.050 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-19 10:44:38.050923 | orchestrator | 10:44:38.050 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-19 10:44:38.050970 | orchestrator | 10:44:38.050 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-19 10:44:38.050995 | orchestrator | 10:44:38.050 STDOUT terraform: you run "tofu init" in the future.
2025-09-19 10:44:38.051507 | orchestrator | 10:44:38.051 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-19 10:44:38.051599 | orchestrator | 10:44:38.051 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-19 10:44:38.051661 | orchestrator | 10:44:38.051 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-19 10:44:38.051671 | orchestrator | 10:44:38.051 STDOUT terraform: should now work.
2025-09-19 10:44:38.051724 | orchestrator | 10:44:38.051 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-19 10:44:38.051773 | orchestrator | 10:44:38.051 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-19 10:44:38.051818 | orchestrator | 10:44:38.051 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-19 10:44:38.136121 | orchestrator | 10:44:38.135 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-19 10:44:38.136375 | orchestrator | 10:44:38.136 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-19 10:44:38.308293 | orchestrator | 10:44:38.308 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-19 10:44:38.308351 | orchestrator | 10:44:38.308 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-19 10:44:38.308360 | orchestrator | 10:44:38.308 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-19 10:44:38.308366 | orchestrator | 10:44:38.308 STDOUT terraform: for this configuration.
2025-09-19 10:44:38.439290 | orchestrator | 10:44:38.438 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-09-19 10:44:38.439353 | orchestrator | 10:44:38.438 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-19 10:44:38.530074 | orchestrator | 10:44:38.529 STDOUT terraform: ci.auto.tfvars
2025-09-19 10:44:38.535033 | orchestrator | 10:44:38.534 STDOUT terraform: default_custom.tf
2025-09-19 10:44:38.663589 | orchestrator | 10:44:38.663 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
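The init output above records which providers were installed and with which signing keys. When auditing many such logs, those `- Installed ...` lines can be extracted mechanically; the following is a hypothetical post-processing sketch (not an OSISM or Zuul tool), matching the exact line shape seen above:

```python
import re

# Extract provider name, version, and signing key ID from OpenTofu init lines
# of the form: "- Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)"
INSTALLED_RE = re.compile(
    r"- Installed (?P<provider>\S+) v(?P<version>[\d.]+) "
    r"\(signed, key ID (?P<key>[0-9A-F]+)\)"
)

def installed_providers(lines):
    """Map provider name -> (version, signing key ID) for matching lines."""
    out = {}
    for line in lines:
        m = INSTALLED_RE.search(line)
        if m:
            out[m.group("provider")] = (m.group("version"), m.group("key"))
    return out
```

Comparing the resulting map against the committed `.terraform.lock.hcl` is one way to spot unexpected provider drift between CI runs.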
2025-09-19 10:44:39.526641 | orchestrator | 10:44:39.526 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-19 10:44:40.551997 | orchestrator | 10:44:40.551 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-19 10:44:40.790329 | orchestrator | 10:44:40.790 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-19 10:44:40.790427 | orchestrator | 10:44:40.790 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-19 10:44:40.790441 | orchestrator | 10:44:40.790 STDOUT terraform:  + create
2025-09-19 10:44:40.790472 | orchestrator | 10:44:40.790 STDOUT terraform:  <= read (data resources)
2025-09-19 10:44:40.790483 | orchestrator | 10:44:40.790 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-19 10:44:40.790494 | orchestrator | 10:44:40.790 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-09-19 10:44:40.790504 | orchestrator | 10:44:40.790 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 10:44:40.790519 | orchestrator | 10:44:40.790 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-19 10:44:40.790529 | orchestrator | 10:44:40.790 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 10:44:40.790539 | orchestrator | 10:44:40.790 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 10:44:40.790548 | orchestrator | 10:44:40.790 STDOUT terraform:  + file = (known after apply)
2025-09-19 10:44:40.790562 | orchestrator | 10:44:40.790 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.790575 | orchestrator | 10:44:40.790 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.790608 | orchestrator | 10:44:40.790 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 10:44:40.790621 | orchestrator | 10:44:40.790 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 10:44:40.790634 | orchestrator | 10:44:40.790 STDOUT terraform:  + most_recent = true
2025-09-19 10:44:40.790695 | orchestrator | 10:44:40.790 STDOUT terraform:  + name = (known after apply)
2025-09-19 10:44:40.790708 | orchestrator | 10:44:40.790 STDOUT terraform:  + protected = (known after apply)
2025-09-19 10:44:40.790721 | orchestrator | 10:44:40.790 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.790734 | orchestrator | 10:44:40.790 STDOUT terraform:  + schema = (known after apply)
2025-09-19 10:44:40.790766 | orchestrator | 10:44:40.790 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 10:44:40.790779 | orchestrator | 10:44:40.790 STDOUT terraform:  + tags = (known after apply)
2025-09-19 10:44:40.790816 | orchestrator | 10:44:40.790 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 10:44:40.790839 | orchestrator | 10:44:40.790 STDOUT terraform:  }
2025-09-19 10:44:40.790866 | orchestrator | 10:44:40.790 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-09-19 10:44:40.790925 | orchestrator | 10:44:40.790 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 10:44:40.790945 | orchestrator | 10:44:40.790 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-19 10:44:40.790965 | orchestrator | 10:44:40.790 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 10:44:40.790979 | orchestrator | 10:44:40.790 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 10:44:40.790993 | orchestrator | 10:44:40.790 STDOUT terraform:  + file = (known after apply)
2025-09-19 10:44:40.791025 | orchestrator | 10:44:40.790 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.791039 | orchestrator | 10:44:40.790 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.791077 | orchestrator | 10:44:40.791 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 10:44:40.791091 | orchestrator | 10:44:40.791 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 10:44:40.791115 | orchestrator | 10:44:40.791 STDOUT terraform:  + most_recent = true
2025-09-19 10:44:40.791128 | orchestrator | 10:44:40.791 STDOUT terraform:  + name = (known after apply)
2025-09-19 10:44:40.791159 | orchestrator | 10:44:40.791 STDOUT terraform:  + protected = (known after apply)
2025-09-19 10:44:40.791172 | orchestrator | 10:44:40.791 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.791209 | orchestrator | 10:44:40.791 STDOUT terraform:  + schema = (known after apply)
2025-09-19 10:44:40.791223 | orchestrator | 10:44:40.791 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 10:44:40.791257 | orchestrator | 10:44:40.791 STDOUT terraform:  + tags = (known after apply)
2025-09-19 10:44:40.791288 | orchestrator | 10:44:40.791 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 10:44:40.791298 | orchestrator | 10:44:40.791 STDOUT terraform:  }
2025-09-19 10:44:40.791311 | orchestrator | 10:44:40.791 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-09-19 10:44:40.791335 | orchestrator | 10:44:40.791 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-09-19 10:44:40.791375 | orchestrator | 10:44:40.791 STDOUT terraform:  + content = (known after apply)
2025-09-19 10:44:40.791405 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 10:44:40.791460 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 10:44:40.791475 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 10:44:40.791504 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 10:44:40.791559 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 10:44:40.791575 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 10:44:40.791587 | orchestrator | 10:44:40.791 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 10:44:40.791637 | orchestrator | 10:44:40.791 STDOUT terraform:  + file_permission = "0644"
2025-09-19 10:44:40.791652 | orchestrator | 10:44:40.791 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-09-19 10:44:40.791689 | orchestrator | 10:44:40.791 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.791701 | orchestrator | 10:44:40.791 STDOUT terraform:  }
2025-09-19 10:44:40.791714 | orchestrator | 10:44:40.791 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-09-19 10:44:40.791759 | orchestrator | 10:44:40.791 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-09-19 10:44:40.791773 | orchestrator | 10:44:40.791 STDOUT terraform:  + content = (known after apply)
2025-09-19 10:44:40.791811 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 10:44:40.791839 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 10:44:40.791905 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 10:44:40.791920 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 10:44:40.791934 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 10:44:40.791971 | orchestrator | 10:44:40.791 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 10:44:40.791986 | orchestrator | 10:44:40.791 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 10:44:40.791999 | orchestrator | 10:44:40.791 STDOUT terraform:  + file_permission = "0644"
2025-09-19 10:44:40.792071 | orchestrator | 10:44:40.791 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-09-19 10:44:40.792084 | orchestrator | 10:44:40.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.792097 | orchestrator | 10:44:40.792 STDOUT terraform:  }
2025-09-19 10:44:40.792139 | orchestrator | 10:44:40.792 STDOUT terraform:  # local_file.inventory will be created
2025-09-19 10:44:40.792154 | orchestrator | 10:44:40.792 STDOUT terraform:  + resource "local_file" "inventory" {
2025-09-19 10:44:40.792182 | orchestrator | 10:44:40.792 STDOUT terraform:  + content = (known after apply)
2025-09-19 10:44:40.792221 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 10:44:40.792257 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 10:44:40.792290 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 10:44:40.792354 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 10:44:40.792371 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 10:44:40.792384 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 10:44:40.792423 | orchestrator | 10:44:40.792 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 10:44:40.792438 | orchestrator | 10:44:40.792 STDOUT terraform:  + file_permission = "0644"
2025-09-19 10:44:40.792450 | orchestrator | 10:44:40.792 STDOUT terraform:  + filename = "inventory.ci"
2025-09-19 10:44:40.792513 | orchestrator | 10:44:40.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.792528 | orchestrator | 10:44:40.792 STDOUT terraform:  }
2025-09-19 10:44:40.792541 | orchestrator | 10:44:40.792 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-09-19 10:44:40.792551 | orchestrator | 10:44:40.792 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-09-19 10:44:40.792565 | orchestrator | 10:44:40.792 STDOUT terraform:  + content = (sensitive value)
2025-09-19 10:44:40.792610 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 10:44:40.792672 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 10:44:40.792689 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 10:44:40.792733 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 10:44:40.792795 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 10:44:40.792811 | orchestrator | 10:44:40.792 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 10:44:40.792824 | orchestrator | 10:44:40.792 STDOUT terraform:  + directory_permission = "0700"
2025-09-19 10:44:40.792865 | orchestrator | 10:44:40.792 STDOUT terraform:  + file_permission = "0600"
2025-09-19 10:44:40.792879 | orchestrator | 10:44:40.792 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-09-19 10:44:40.792943 | orchestrator | 10:44:40.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.792956 | orchestrator | 10:44:40.792 STDOUT terraform:  }
2025-09-19 10:44:40.792969 | orchestrator | 10:44:40.792 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-09-19 10:44:40.792982 | orchestrator | 10:44:40.792 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-09-19 10:44:40.792994 | orchestrator | 10:44:40.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.793025 | orchestrator | 10:44:40.792 STDOUT terraform:  }
2025-09-19 10:44:40.793068 | orchestrator | 10:44:40.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-19 10:44:40.793257 | orchestrator | 10:44:40.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-19 10:44:40.793276 | orchestrator | 10:44:40.793 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.793286 | orchestrator | 10:44:40.793 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.793296 | orchestrator | 10:44:40.793 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.793323 | orchestrator | 10:44:40.793 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.793336 | orchestrator | 10:44:40.793 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.793346 | orchestrator | 10:44:40.793 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-09-19 10:44:40.793355 | orchestrator | 10:44:40.793 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.793368 | orchestrator | 10:44:40.793 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.793378 | orchestrator | 10:44:40.793 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.793390 | orchestrator | 10:44:40.793 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.793400 | orchestrator | 10:44:40.793 STDOUT terraform:  }
2025-09-19 10:44:40.793481 | orchestrator | 10:44:40.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-19 10:44:40.793519 | orchestrator | 10:44:40.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.793529 | orchestrator | 10:44:40.793 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.793541 | orchestrator | 10:44:40.793 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.793580 | orchestrator | 10:44:40.793 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.793594 | orchestrator | 10:44:40.793 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.793904 | orchestrator | 10:44:40.793 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.793921 | orchestrator | 10:44:40.793 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-09-19 10:44:40.793931 | orchestrator | 10:44:40.793 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.793960 | orchestrator | 10:44:40.793 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.793970 | orchestrator | 10:44:40.793 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.793979 | orchestrator | 10:44:40.793 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.793989 | orchestrator | 10:44:40.793 STDOUT terraform:  }
2025-09-19 10:44:40.793999 | orchestrator | 10:44:40.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-19 10:44:40.794093 | orchestrator | 10:44:40.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.794105 | orchestrator | 10:44:40.793 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.794124 | orchestrator | 10:44:40.793 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.794139 | orchestrator | 10:44:40.793 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.794149 | orchestrator | 10:44:40.793 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.794159 | orchestrator | 10:44:40.793 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.794168 | orchestrator | 10:44:40.793 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-09-19 10:44:40.794178 | orchestrator | 10:44:40.794 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.802472 | orchestrator | 10:44:40.794 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.802519 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.802526 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.802540 | orchestrator | 10:44:40.798 STDOUT terraform:  }
2025-09-19 10:44:40.802546 | orchestrator | 10:44:40.798 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-19 10:44:40.802553 | orchestrator | 10:44:40.798 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.802559 | orchestrator | 10:44:40.798 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.802572 | orchestrator | 10:44:40.798 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.802578 | orchestrator | 10:44:40.798 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.802583 | orchestrator | 10:44:40.798 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.802589 | orchestrator | 10:44:40.798 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.802594 | orchestrator | 10:44:40.798 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-09-19 10:44:40.802600 | orchestrator | 10:44:40.798 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.802605 | orchestrator | 10:44:40.798 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.802610 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.802616 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.802621 | orchestrator | 10:44:40.798 STDOUT terraform:  }
2025-09-19 10:44:40.802627 | orchestrator | 10:44:40.798 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-19 10:44:40.802632 | orchestrator | 10:44:40.798 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.802638 | orchestrator | 10:44:40.798 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.802643 | orchestrator | 10:44:40.798 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.802648 | orchestrator | 10:44:40.798 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.802654 | orchestrator | 10:44:40.798 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.802659 | orchestrator | 10:44:40.798 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.802676 | orchestrator | 10:44:40.798 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-09-19 10:44:40.802682 | orchestrator | 10:44:40.798 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.802687 | orchestrator | 10:44:40.798 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.802693 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.802698 | orchestrator | 10:44:40.798 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.802703 | orchestrator | 10:44:40.798 STDOUT terraform:  }
2025-09-19 10:44:40.802709 | orchestrator | 10:44:40.798 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-19 10:44:40.802717 | orchestrator | 10:44:40.798 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.802723 | orchestrator | 10:44:40.798 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.802728 | orchestrator | 10:44:40.798 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.802733 | orchestrator | 10:44:40.799 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.802739 | orchestrator | 10:44:40.799 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.802744 | orchestrator | 10:44:40.799 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.802759 | orchestrator | 10:44:40.799 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-09-19 10:44:40.802765 | orchestrator | 10:44:40.799 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.802770 | orchestrator | 10:44:40.799 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.802775 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.802781 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.802786 | orchestrator | 10:44:40.799 STDOUT terraform:  }
2025-09-19 10:44:40.802791 | orchestrator | 10:44:40.799 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-19 10:44:40.802797 | orchestrator | 10:44:40.799 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 10:44:40.802802 | orchestrator | 10:44:40.799 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 10:44:40.802808 | orchestrator | 10:44:40.799 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 10:44:40.802813 | orchestrator | 10:44:40.799 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.802818 | orchestrator | 10:44:40.799 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 10:44:40.802824 | orchestrator | 10:44:40.799 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 10:44:40.802829 | orchestrator | 10:44:40.799 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-09-19 10:44:40.802835 | orchestrator | 10:44:40.799 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.802840 | orchestrator | 10:44:40.799 STDOUT terraform:  + size = 80
2025-09-19 10:44:40.802849 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 10:44:40.802855 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 10:44:40.802860 | orchestrator | 10:44:40.799 STDOUT terraform:  }
2025-09-19 10:44:40.802866 | orchestrator | 10:44:40.799 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-19 10:44:40.802871 | orchestrator | 10:44:40.799 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-19 10:44:40.802885 | orchestrator | 10:44:40.799 STDOUT
terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.802890 | orchestrator | 10:44:40.799 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.802896 | orchestrator | 10:44:40.799 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.802901 | orchestrator | 10:44:40.799 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.802906 | orchestrator | 10:44:40.799 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-19 10:44:40.802912 | orchestrator | 10:44:40.799 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.802917 | orchestrator | 10:44:40.799 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.802923 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.802928 | orchestrator | 10:44:40.799 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.802933 | orchestrator | 10:44:40.799 STDOUT terraform:  } 2025-09-19 10:44:40.802939 | orchestrator | 10:44:40.799 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-19 10:44:40.802944 | orchestrator | 10:44:40.800 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.802950 | orchestrator | 10:44:40.800 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.802955 | orchestrator | 10:44:40.800 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.802960 | orchestrator | 10:44:40.800 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.802966 | orchestrator | 10:44:40.800 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.802975 | orchestrator | 10:44:40.800 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-19 10:44:40.802981 | orchestrator | 10:44:40.800 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.802989 | orchestrator | 10:44:40.800 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.802995 | 
orchestrator | 10:44:40.800 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803000 | orchestrator | 10:44:40.800 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803017 | orchestrator | 10:44:40.800 STDOUT terraform:  } 2025-09-19 10:44:40.803023 | orchestrator | 10:44:40.800 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-19 10:44:40.803029 | orchestrator | 10:44:40.800 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803034 | orchestrator | 10:44:40.800 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803043 | orchestrator | 10:44:40.800 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803049 | orchestrator | 10:44:40.800 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803054 | orchestrator | 10:44:40.800 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803060 | orchestrator | 10:44:40.800 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-19 10:44:40.803065 | orchestrator | 10:44:40.800 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803070 | orchestrator | 10:44:40.800 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803079 | orchestrator | 10:44:40.800 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803084 | orchestrator | 10:44:40.800 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803090 | orchestrator | 10:44:40.800 STDOUT terraform:  } 2025-09-19 10:44:40.803095 | orchestrator | 10:44:40.800 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-19 10:44:40.803101 | orchestrator | 10:44:40.800 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803106 | orchestrator | 10:44:40.800 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803111 | orchestrator | 
10:44:40.800 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803117 | orchestrator | 10:44:40.800 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803122 | orchestrator | 10:44:40.800 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803128 | orchestrator | 10:44:40.800 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-19 10:44:40.803133 | orchestrator | 10:44:40.800 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803138 | orchestrator | 10:44:40.800 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803144 | orchestrator | 10:44:40.800 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803149 | orchestrator | 10:44:40.800 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803154 | orchestrator | 10:44:40.800 STDOUT terraform:  } 2025-09-19 10:44:40.803160 | orchestrator | 10:44:40.800 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-19 10:44:40.803165 | orchestrator | 10:44:40.800 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803171 | orchestrator | 10:44:40.801 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803176 | orchestrator | 10:44:40.801 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803182 | orchestrator | 10:44:40.801 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803187 | orchestrator | 10:44:40.801 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803193 | orchestrator | 10:44:40.801 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-19 10:44:40.803205 | orchestrator | 10:44:40.801 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803214 | orchestrator | 10:44:40.801 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803220 | orchestrator | 10:44:40.801 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 
10:44:40.803225 | orchestrator | 10:44:40.801 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803231 | orchestrator | 10:44:40.801 STDOUT terraform:  } 2025-09-19 10:44:40.803236 | orchestrator | 10:44:40.801 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-19 10:44:40.803241 | orchestrator | 10:44:40.801 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803247 | orchestrator | 10:44:40.801 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803252 | orchestrator | 10:44:40.801 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803257 | orchestrator | 10:44:40.801 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803263 | orchestrator | 10:44:40.801 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803268 | orchestrator | 10:44:40.801 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-19 10:44:40.803273 | orchestrator | 10:44:40.801 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803282 | orchestrator | 10:44:40.801 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803287 | orchestrator | 10:44:40.801 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803293 | orchestrator | 10:44:40.801 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803298 | orchestrator | 10:44:40.801 STDOUT terraform:  } 2025-09-19 10:44:40.803303 | orchestrator | 10:44:40.801 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-19 10:44:40.803309 | orchestrator | 10:44:40.802 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803314 | orchestrator | 10:44:40.802 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803319 | orchestrator | 10:44:40.802 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803325 | 
orchestrator | 10:44:40.803 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803330 | orchestrator | 10:44:40.803 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803335 | orchestrator | 10:44:40.803 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-19 10:44:40.803341 | orchestrator | 10:44:40.803 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803346 | orchestrator | 10:44:40.803 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803354 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803359 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.803365 | orchestrator | 10:44:40.803 STDOUT terraform:  } 2025-09-19 10:44:40.803370 | orchestrator | 10:44:40.803 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-19 10:44:40.803375 | orchestrator | 10:44:40.803 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803384 | orchestrator | 10:44:40.803 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803392 | orchestrator | 10:44:40.803 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803398 | orchestrator | 10:44:40.803 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803433 | orchestrator | 10:44:40.803 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.803471 | orchestrator | 10:44:40.803 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-19 10:44:40.803623 | orchestrator | 10:44:40.803 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.803637 | orchestrator | 10:44:40.803 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.803642 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.803648 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_type = "ssd" 
2025-09-19 10:44:40.803653 | orchestrator | 10:44:40.803 STDOUT terraform:  } 2025-09-19 10:44:40.803661 | orchestrator | 10:44:40.803 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-19 10:44:40.803667 | orchestrator | 10:44:40.803 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 10:44:40.803760 | orchestrator | 10:44:40.803 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:40.803768 | orchestrator | 10:44:40.803 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.803773 | orchestrator | 10:44:40.803 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.803781 | orchestrator | 10:44:40.803 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:40.806469 | orchestrator | 10:44:40.803 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-19 10:44:40.806490 | orchestrator | 10:44:40.803 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.806495 | orchestrator | 10:44:40.803 STDOUT terraform:  + size = 20 2025-09-19 10:44:40.806501 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:40.806506 | orchestrator | 10:44:40.803 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:40.806524 | orchestrator | 10:44:40.803 STDOUT terraform:  } 2025-09-19 10:44:40.806536 | orchestrator | 10:44:40.803 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-19 10:44:40.806542 | orchestrator | 10:44:40.803 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-19 10:44:40.806546 | orchestrator | 10:44:40.803 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 10:44:40.806551 | orchestrator | 10:44:40.804 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 10:44:40.806556 | orchestrator | 10:44:40.804 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-19 10:44:40.806561 | orchestrator | 10:44:40.804 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.806565 | orchestrator | 10:44:40.804 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.806576 | orchestrator | 10:44:40.804 STDOUT terraform:  + config_drive = true 2025-09-19 10:44:40.806581 | orchestrator | 10:44:40.804 STDOUT terraform:  + created = (known after apply) 2025-09-19 10:44:40.806586 | orchestrator | 10:44:40.804 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 10:44:40.806591 | orchestrator | 10:44:40.804 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-19 10:44:40.806595 | orchestrator | 10:44:40.804 STDOUT terraform:  + force_delete = false 2025-09-19 10:44:40.806600 | orchestrator | 10:44:40.804 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 10:44:40.806605 | orchestrator | 10:44:40.804 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.806610 | orchestrator | 10:44:40.804 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 10:44:40.806614 | orchestrator | 10:44:40.804 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 10:44:40.806619 | orchestrator | 10:44:40.804 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 10:44:40.806624 | orchestrator | 10:44:40.804 STDOUT terraform:  + name = "testbed-manager" 2025-09-19 10:44:40.806629 | orchestrator | 10:44:40.804 STDOUT terraform:  + power_state = "active" 2025-09-19 10:44:40.806633 | orchestrator | 10:44:40.804 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.806638 | orchestrator | 10:44:40.804 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 10:44:40.806643 | orchestrator | 10:44:40.804 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 10:44:40.806647 | orchestrator | 10:44:40.804 STDOUT terraform:  + updated = (known after apply) 2025-09-19 10:44:40.806652 | orchestrator | 10:44:40.804 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-19 10:44:40.806657 | orchestrator | 10:44:40.804 STDOUT terraform:  + block_device { 2025-09-19 10:44:40.806662 | orchestrator | 10:44:40.804 STDOUT terraform:  + boot_index = 0 2025-09-19 10:44:40.806667 | orchestrator | 10:44:40.804 STDOUT terraform:  + delete_on_termination = false 2025-09-19 10:44:40.806671 | orchestrator | 10:44:40.804 STDOUT terraform:  + destination_type = "volume" 2025-09-19 10:44:40.806676 | orchestrator | 10:44:40.804 STDOUT terraform:  + multiattach = false 2025-09-19 10:44:40.806681 | orchestrator | 10:44:40.804 STDOUT terraform:  + source_type = "volume" 2025-09-19 10:44:40.806686 | orchestrator | 10:44:40.804 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.806696 | orchestrator | 10:44:40.804 STDOUT terraform:  } 2025-09-19 10:44:40.806701 | orchestrator | 10:44:40.804 STDOUT terraform:  + network { 2025-09-19 10:44:40.806706 | orchestrator | 10:44:40.804 STDOUT terraform:  + access_network = false 2025-09-19 10:44:40.806711 | orchestrator | 10:44:40.804 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 10:44:40.806716 | orchestrator | 10:44:40.804 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 10:44:40.806721 | orchestrator | 10:44:40.804 STDOUT terraform:  + mac = (known after apply) 2025-09-19 10:44:40.806729 | orchestrator | 10:44:40.804 STDOUT terraform:  + name = (known after apply) 2025-09-19 10:44:40.806734 | orchestrator | 10:44:40.804 STDOUT terraform:  + port = (known after apply) 2025-09-19 10:44:40.806738 | orchestrator | 10:44:40.804 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.806743 | orchestrator | 10:44:40.804 STDOUT terraform:  } 2025-09-19 10:44:40.806748 | orchestrator | 10:44:40.804 STDOUT terraform:  } 2025-09-19 10:44:40.806753 | orchestrator | 10:44:40.804 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-19 10:44:40.806758 | orchestrator | 10:44:40.804 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 10:44:40.806762 | orchestrator | 10:44:40.804 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 10:44:40.806770 | orchestrator | 10:44:40.804 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 10:44:40.806775 | orchestrator | 10:44:40.805 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 10:44:40.806780 | orchestrator | 10:44:40.805 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.806784 | orchestrator | 10:44:40.805 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.806789 | orchestrator | 10:44:40.805 STDOUT terraform:  + config_drive = true 2025-09-19 10:44:40.806794 | orchestrator | 10:44:40.805 STDOUT terraform:  + created = (known after apply) 2025-09-19 10:44:40.806799 | orchestrator | 10:44:40.805 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 10:44:40.806803 | orchestrator | 10:44:40.805 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 10:44:40.806808 | orchestrator | 10:44:40.805 STDOUT terraform:  + force_delete = false 2025-09-19 10:44:40.806813 | orchestrator | 10:44:40.805 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 10:44:40.806818 | orchestrator | 10:44:40.805 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.806822 | orchestrator | 10:44:40.805 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 10:44:40.806827 | orchestrator | 10:44:40.805 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 10:44:40.806832 | orchestrator | 10:44:40.805 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 10:44:40.806836 | orchestrator | 10:44:40.805 STDOUT terraform:  + name = "testbed-node-0" 2025-09-19 10:44:40.806841 | orchestrator | 10:44:40.805 STDOUT terraform:  + power_state = "active" 2025-09-19 10:44:40.806846 | orchestrator | 10:44:40.805 STDOUT terraform:  + region = (known after 
apply) 2025-09-19 10:44:40.806851 | orchestrator | 10:44:40.805 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 10:44:40.806855 | orchestrator | 10:44:40.805 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 10:44:40.806860 | orchestrator | 10:44:40.805 STDOUT terraform:  + updated = (known after apply) 2025-09-19 10:44:40.806865 | orchestrator | 10:44:40.805 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 10:44:40.806870 | orchestrator | 10:44:40.805 STDOUT terraform:  + block_device { 2025-09-19 10:44:40.806881 | orchestrator | 10:44:40.805 STDOUT terraform:  + boot_index = 0 2025-09-19 10:44:40.806886 | orchestrator | 10:44:40.805 STDOUT terraform:  + delete_on_termination = false 2025-09-19 10:44:40.806896 | orchestrator | 10:44:40.805 STDOUT terraform:  + destination_type = "volume" 2025-09-19 10:44:40.806901 | orchestrator | 10:44:40.805 STDOUT terraform:  + multiattach = false 2025-09-19 10:44:40.806906 | orchestrator | 10:44:40.805 STDOUT terraform:  + source_type = "volume" 2025-09-19 10:44:40.806911 | orchestrator | 10:44:40.805 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.806916 | orchestrator | 10:44:40.805 STDOUT terraform:  } 2025-09-19 10:44:40.806920 | orchestrator | 10:44:40.805 STDOUT terraform:  + network { 2025-09-19 10:44:40.806925 | orchestrator | 10:44:40.805 STDOUT terraform:  + access_network = false 2025-09-19 10:44:40.806930 | orchestrator | 10:44:40.805 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 10:44:40.806935 | orchestrator | 10:44:40.805 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 10:44:40.806939 | orchestrator | 10:44:40.805 STDOUT terraform:  + mac = (known after apply) 2025-09-19 10:44:40.806944 | orchestrator | 10:44:40.805 STDOUT terraform:  + name = (known after apply) 2025-09-19 10:44:40.806949 | orchestrator | 10:44:40.805 STDOUT terraform:  + port = (known after apply) 2025-09-19 
10:44:40.806954 | orchestrator | 10:44:40.805 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.806958 | orchestrator | 10:44:40.805 STDOUT terraform:  } 2025-09-19 10:44:40.806963 | orchestrator | 10:44:40.805 STDOUT terraform:  } 2025-09-19 10:44:40.806968 | orchestrator | 10:44:40.805 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-19 10:44:40.806973 | orchestrator | 10:44:40.805 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 10:44:40.806977 | orchestrator | 10:44:40.805 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 10:44:40.806982 | orchestrator | 10:44:40.806 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 10:44:40.806987 | orchestrator | 10:44:40.806 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 10:44:40.806992 | orchestrator | 10:44:40.806 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.806996 | orchestrator | 10:44:40.806 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.807001 | orchestrator | 10:44:40.806 STDOUT terraform:  + config_drive = true 2025-09-19 10:44:40.807017 | orchestrator | 10:44:40.806 STDOUT terraform:  + created = (known after apply) 2025-09-19 10:44:40.807022 | orchestrator | 10:44:40.806 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 10:44:40.807027 | orchestrator | 10:44:40.806 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 10:44:40.807031 | orchestrator | 10:44:40.806 STDOUT terraform:  + force_delete = false 2025-09-19 10:44:40.807039 | orchestrator | 10:44:40.806 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 10:44:40.807048 | orchestrator | 10:44:40.806 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.807052 | orchestrator | 10:44:40.806 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 10:44:40.807057 | orchestrator | 10:44:40.806 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-19 10:44:40.807062 | orchestrator | 10:44:40.806 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 10:44:40.807067 | orchestrator | 10:44:40.806 STDOUT terraform:  + name = "testbed-node-1" 2025-09-19 10:44:40.807071 | orchestrator | 10:44:40.806 STDOUT terraform:  + power_state = "active" 2025-09-19 10:44:40.807078 | orchestrator | 10:44:40.806 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.807083 | orchestrator | 10:44:40.806 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 10:44:40.807088 | orchestrator | 10:44:40.806 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 10:44:40.807093 | orchestrator | 10:44:40.806 STDOUT terraform:  + updated = (known after apply) 2025-09-19 10:44:40.807097 | orchestrator | 10:44:40.806 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 10:44:40.807104 | orchestrator | 10:44:40.807 STDOUT terraform:  + block_device { 2025-09-19 10:44:40.807109 | orchestrator | 10:44:40.807 STDOUT terraform:  + boot_index = 0 2025-09-19 10:44:40.807134 | orchestrator | 10:44:40.807 STDOUT terraform:  + delete_on_termination = false 2025-09-19 10:44:40.807163 | orchestrator | 10:44:40.807 STDOUT terraform:  + destination_type = "volume" 2025-09-19 10:44:40.807198 | orchestrator | 10:44:40.807 STDOUT terraform:  + multiattach = false 2025-09-19 10:44:40.807221 | orchestrator | 10:44:40.807 STDOUT terraform:  + source_type = "volume" 2025-09-19 10:44:40.807254 | orchestrator | 10:44:40.807 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.807262 | orchestrator | 10:44:40.807 STDOUT terraform:  } 2025-09-19 10:44:40.807269 | orchestrator | 10:44:40.807 STDOUT terraform:  + network { 2025-09-19 10:44:40.807298 | orchestrator | 10:44:40.807 STDOUT terraform:  + access_network = false 2025-09-19 10:44:40.807326 | orchestrator | 10:44:40.807 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-19 10:44:40.807353 | orchestrator | 10:44:40.807 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 10:44:40.807395 | orchestrator | 10:44:40.807 STDOUT terraform:  + mac = (known after apply) 2025-09-19 10:44:40.807417 | orchestrator | 10:44:40.807 STDOUT terraform:  + name = (known after apply) 2025-09-19 10:44:40.807446 | orchestrator | 10:44:40.807 STDOUT terraform:  + port = (known after apply) 2025-09-19 10:44:40.807488 | orchestrator | 10:44:40.807 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 10:44:40.807494 | orchestrator | 10:44:40.807 STDOUT terraform:  } 2025-09-19 10:44:40.807501 | orchestrator | 10:44:40.807 STDOUT terraform:  } 2025-09-19 10:44:40.807536 | orchestrator | 10:44:40.807 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-19 10:44:40.807576 | orchestrator | 10:44:40.807 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 10:44:40.807612 | orchestrator | 10:44:40.807 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 10:44:40.807645 | orchestrator | 10:44:40.807 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 10:44:40.807686 | orchestrator | 10:44:40.807 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 10:44:40.807715 | orchestrator | 10:44:40.807 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.807738 | orchestrator | 10:44:40.807 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:40.807757 | orchestrator | 10:44:40.807 STDOUT terraform:  + config_drive = true 2025-09-19 10:44:40.807802 | orchestrator | 10:44:40.807 STDOUT terraform:  + created = (known after apply) 2025-09-19 10:44:40.807825 | orchestrator | 10:44:40.807 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 10:44:40.807853 | orchestrator | 10:44:40.807 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 10:44:40.807876 | orchestrator | 10:44:40.807 
2025-09-19 10:44:40 | orchestrator | STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:40.818636 | orchestrator | 10:44:40.818 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:40.821124 | orchestrator | 10:44:40.818 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.821159 | orchestrator | 10:44:40.821 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:40.821177 | orchestrator | 10:44:40.821 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:40.821190 | orchestrator | 10:44:40.821 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:40.821204 | orchestrator | 10:44:40.821 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:40.821238 | orchestrator | 10:44:40.821 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.821281 | orchestrator | 10:44:40.821 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:40.821295 | orchestrator | 10:44:40.821 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:40.821345 | orchestrator | 10:44:40.821 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:40.821359 | orchestrator | 10:44:40.821 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:40.821409 | orchestrator | 10:44:40.821 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.821424 | orchestrator | 10:44:40.821 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:40.821474 | orchestrator | 10:44:40.821 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.821489 | orchestrator | 10:44:40.821 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.821503 | orchestrator | 10:44:40.821 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:40.821517 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821530 | orchestrator | 10:44:40.821 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-19 10:44:40.821567 | orchestrator | 10:44:40.821 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:40.821578 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821591 | orchestrator | 10:44:40.821 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.821604 | orchestrator | 10:44:40.821 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:40.821636 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821646 | orchestrator | 10:44:40.821 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.821659 | orchestrator | 10:44:40.821 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:40.821668 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821681 | orchestrator | 10:44:40.821 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:40.821694 | orchestrator | 10:44:40.821 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:40.821706 | orchestrator | 10:44:40.821 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-19 10:44:40.821744 | orchestrator | 10:44:40.821 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:40.821755 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821768 | orchestrator | 10:44:40.821 STDOUT terraform:  } 2025-09-19 10:44:40.821804 | orchestrator | 10:44:40.821 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-19 10:44:40.821852 | orchestrator | 10:44:40.821 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:40.821867 | orchestrator | 10:44:40.821 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:40.821917 | orchestrator | 10:44:40.821 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:40.821932 | orchestrator | 10:44:40.821 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-19 10:44:40.821983 | orchestrator | 10:44:40.821 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.822065 | orchestrator | 10:44:40.821 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:40.822080 | orchestrator | 10:44:40.822 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:40.822110 | orchestrator | 10:44:40.822 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:40.822123 | orchestrator | 10:44:40.822 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:40.822170 | orchestrator | 10:44:40.822 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.822198 | orchestrator | 10:44:40.822 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:40.822295 | orchestrator | 10:44:40.822 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:40.822306 | orchestrator | 10:44:40.822 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:40.822315 | orchestrator | 10:44:40.822 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:40.822328 | orchestrator | 10:44:40.822 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.822355 | orchestrator | 10:44:40.822 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:40.822393 | orchestrator | 10:44:40.822 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.822408 | orchestrator | 10:44:40.822 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.822421 | orchestrator | 10:44:40.822 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:40.822443 | orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.822456 | orchestrator | 10:44:40.822 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.822469 | orchestrator | 10:44:40.822 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:40.822482 | 
orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.822494 | orchestrator | 10:44:40.822 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.822530 | orchestrator | 10:44:40.822 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:40.822545 | orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.822555 | orchestrator | 10:44:40.822 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.822567 | orchestrator | 10:44:40.822 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:40.822580 | orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.822593 | orchestrator | 10:44:40.822 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:40.822606 | orchestrator | 10:44:40.822 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:40.822639 | orchestrator | 10:44:40.822 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-19 10:44:40.822654 | orchestrator | 10:44:40.822 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:40.822667 | orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.822680 | orchestrator | 10:44:40.822 STDOUT terraform:  } 2025-09-19 10:44:40.830444 | orchestrator | 10:44:40.822 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-19 10:44:40.830495 | orchestrator | 10:44:40.822 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:40.830507 | orchestrator | 10:44:40.822 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:40.830517 | orchestrator | 10:44:40.822 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:40.830527 | orchestrator | 10:44:40.822 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:40.830537 | orchestrator | 10:44:40.822 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.830566 | orchestrator | 
10:44:40.822 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:40.830577 | orchestrator | 10:44:40.822 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:40.830587 | orchestrator | 10:44:40.822 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:40.830596 | orchestrator | 10:44:40.822 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:40.830606 | orchestrator | 10:44:40.822 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.830615 | orchestrator | 10:44:40.823 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:40.830625 | orchestrator | 10:44:40.823 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:40.830643 | orchestrator | 10:44:40.823 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:40.830667 | orchestrator | 10:44:40.823 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:40.830677 | orchestrator | 10:44:40.823 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.830687 | orchestrator | 10:44:40.823 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:40.830696 | orchestrator | 10:44:40.823 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.830706 | orchestrator | 10:44:40.823 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.830716 | orchestrator | 10:44:40.823 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:40.830726 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830737 | orchestrator | 10:44:40.823 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.830746 | orchestrator | 10:44:40.823 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:40.830756 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830766 | orchestrator | 10:44:40.823 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 
10:44:40.830775 | orchestrator | 10:44:40.823 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:40.830785 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830795 | orchestrator | 10:44:40.823 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.830805 | orchestrator | 10:44:40.823 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:40.830814 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830824 | orchestrator | 10:44:40.823 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:40.830834 | orchestrator | 10:44:40.823 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:40.830843 | orchestrator | 10:44:40.823 STDOUT terraform:  + ip_ad 2025-09-19 10:44:40.830853 | orchestrator | 10:44:40.823 STDOUT terraform: dress = "192.168.16.14" 2025-09-19 10:44:40.830862 | orchestrator | 10:44:40.823 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:40.830872 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830882 | orchestrator | 10:44:40.823 STDOUT terraform:  } 2025-09-19 10:44:40.830892 | orchestrator | 10:44:40.823 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-19 10:44:40.830912 | orchestrator | 10:44:40.823 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:40.830922 | orchestrator | 10:44:40.823 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:40.830932 | orchestrator | 10:44:40.823 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:40.830942 | orchestrator | 10:44:40.823 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:40.830952 | orchestrator | 10:44:40.823 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.830962 | orchestrator | 10:44:40.823 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:40.830971 | 
orchestrator | 10:44:40.823 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:40.830981 | orchestrator | 10:44:40.823 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:40.830997 | orchestrator | 10:44:40.823 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:40.831027 | orchestrator | 10:44:40.823 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.831037 | orchestrator | 10:44:40.823 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:40.831047 | orchestrator | 10:44:40.823 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:40.831057 | orchestrator | 10:44:40.823 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:40.831066 | orchestrator | 10:44:40.824 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:40.831076 | orchestrator | 10:44:40.824 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.831085 | orchestrator | 10:44:40.824 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:40.831095 | orchestrator | 10:44:40.824 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.831105 | orchestrator | 10:44:40.824 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.831114 | orchestrator | 10:44:40.824 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:40.831124 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831134 | orchestrator | 10:44:40.824 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.831144 | orchestrator | 10:44:40.824 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:40.831153 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831163 | orchestrator | 10:44:40.824 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.831173 | orchestrator | 10:44:40.824 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-09-19 10:44:40.831183 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831192 | orchestrator | 10:44:40.824 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:40.831202 | orchestrator | 10:44:40.824 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:40.831212 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831221 | orchestrator | 10:44:40.824 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:40.831231 | orchestrator | 10:44:40.824 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:40.831241 | orchestrator | 10:44:40.824 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-19 10:44:40.831251 | orchestrator | 10:44:40.824 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:40.831260 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831270 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831280 | orchestrator | 10:44:40.824 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-19 10:44:40.831290 | orchestrator | 10:44:40.824 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-19 10:44:40.831299 | orchestrator | 10:44:40.824 STDOUT terraform:  + force_destroy = false 2025-09-19 10:44:40.831309 | orchestrator | 10:44:40.824 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.831396 | orchestrator | 10:44:40.824 STDOUT terraform:  + port_id = (known after apply) 2025-09-19 10:44:40.831407 | orchestrator | 10:44:40.824 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.831416 | orchestrator | 10:44:40.824 STDOUT terraform:  + router_id = (known after apply) 2025-09-19 10:44:40.831426 | orchestrator | 10:44:40.824 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:40.831436 | orchestrator | 10:44:40.824 STDOUT terraform:  } 2025-09-19 10:44:40.831446 | orchestrator | 
10:44:40.824 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-19 10:44:40.831456 | orchestrator | 10:44:40.824 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-19 10:44:40.831466 | orchestrator | 10:44:40.824 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:40.831475 | orchestrator | 10:44:40.824 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:40.831488 | orchestrator | 10:44:40.824 STDOUT terraform:  + availability_zone_hints = [ 2025-09-19 10:44:40.831499 | orchestrator | 10:44:40.824 STDOUT terraform:  + "nova", 2025-09-19 10:44:40.831508 | orchestrator | 10:44:40.824 STDOUT terraform:  ] 2025-09-19 10:44:40.831518 | orchestrator | 10:44:40.824 STDOUT terraform:  + distributed = (known after apply) 2025-09-19 10:44:40.831528 | orchestrator | 10:44:40.824 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-19 10:44:40.831545 | orchestrator | 10:44:40.824 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-19 10:44:40.831559 | orchestrator | 10:44:40.824 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-19 10:44:40.831569 | orchestrator | 10:44:40.824 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.831579 | orchestrator | 10:44:40.824 STDOUT terraform:  + name = "testbed" 2025-09-19 10:44:40.831589 | orchestrator | 10:44:40.824 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.831599 | orchestrator | 10:44:40.825 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.831608 | orchestrator | 10:44:40.825 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-19 10:44:40.831618 | orchestrator | 10:44:40.825 STDOUT terraform:  } 2025-09-19 10:44:40.831628 | orchestrator | 10:44:40.825 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-19 10:44:40.831639 
| orchestrator | 10:44:40.825 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-19 10:44:40.831649 | orchestrator | 10:44:40.825 STDOUT terraform:  + description = "ssh" 2025-09-19 10:44:40.831659 | orchestrator | 10:44:40.825 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:40.831668 | orchestrator | 10:44:40.825 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:40.831678 | orchestrator | 10:44:40.825 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.831688 | orchestrator | 10:44:40.825 STDOUT terraform:  + port_range_max = 22 2025-09-19 10:44:40.831704 | orchestrator | 10:44:40.825 STDOUT terraform:  + port_range_min = 22 2025-09-19 10:44:40.831714 | orchestrator | 10:44:40.825 STDOUT terraform:  + protocol = "tcp" 2025-09-19 10:44:40.831724 | orchestrator | 10:44:40.825 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.831734 | orchestrator | 10:44:40.825 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:40.831743 | orchestrator | 10:44:40.825 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:40.831753 | orchestrator | 10:44:40.825 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:40.831762 | orchestrator | 10:44:40.825 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:40.831772 | orchestrator | 10:44:40.825 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.831788 | orchestrator | 10:44:40.825 STDOUT terraform:  } 2025-09-19 10:44:40.831798 | orchestrator | 10:44:40.825 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-19 10:44:40.831808 | orchestrator | 10:44:40.825 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-19 10:44:40.831817 | orchestrator | 10:44:40.825 STDOUT terraform:  + 
description = "wireguard" 2025-09-19 10:44:40.831827 | orchestrator | 10:44:40.825 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:40.831837 | orchestrator | 10:44:40.825 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:40.831846 | orchestrator | 10:44:40.825 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.831856 | orchestrator | 10:44:40.825 STDOUT terraform:  + port_range_max = 51820 2025-09-19 10:44:40.831866 | orchestrator | 10:44:40.825 STDOUT terraform:  + port_range_min = 51820 2025-09-19 10:44:40.831875 | orchestrator | 10:44:40.825 STDOUT terraform:  + protocol = "udp" 2025-09-19 10:44:40.831885 | orchestrator | 10:44:40.825 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.831894 | orchestrator | 10:44:40.825 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:40.831904 | orchestrator | 10:44:40.825 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:40.831914 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:40.831927 | orchestrator | 10:44:40.827 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:40.831937 | orchestrator | 10:44:40.827 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.831947 | orchestrator | 10:44:40.827 STDOUT terraform:  } 2025-09-19 10:44:40.831957 | orchestrator | 10:44:40.827 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-19 10:44:40.831967 | orchestrator | 10:44:40.827 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-19 10:44:40.831976 | orchestrator | 10:44:40.827 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:40.831986 | orchestrator | 10:44:40.827 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:40.832002 | orchestrator | 10:44:40.827 STDOUT terraform:  + id = (known 
after apply) 2025-09-19 10:44:40.832067 | orchestrator | 10:44:40.827 STDOUT terraform:  + protocol = "tcp" 2025-09-19 10:44:40.832078 | orchestrator | 10:44:40.827 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.832087 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:40.832097 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:40.832107 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 10:44:40.832117 | orchestrator | 10:44:40.827 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:40.832126 | orchestrator | 10:44:40.827 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.832136 | orchestrator | 10:44:40.827 STDOUT terraform:  } 2025-09-19 10:44:40.832146 | orchestrator | 10:44:40.827 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-19 10:44:40.832156 | orchestrator | 10:44:40.827 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-19 10:44:40.832166 | orchestrator | 10:44:40.827 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:40.832175 | orchestrator | 10:44:40.827 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:40.832185 | orchestrator | 10:44:40.827 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.832201 | orchestrator | 10:44:40.827 STDOUT terraform:  + protocol = "udp" 2025-09-19 10:44:40.832211 | orchestrator | 10:44:40.827 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.832221 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:40.832230 | orchestrator | 10:44:40.827 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:40.832240 | orchestrator | 
10:44:40.828 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 10:44:40.832249 | orchestrator | 10:44:40.828 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:40.832259 | orchestrator | 10:44:40.828 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.832268 | orchestrator | 10:44:40.828 STDOUT terraform:  } 2025-09-19 10:44:40.832278 | orchestrator | 10:44:40.828 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-19 10:44:40.832288 | orchestrator | 10:44:40.828 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-19 10:44:40.832297 | orchestrator | 10:44:40.828 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:40.832361 | orchestrator | 10:44:40.828 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:40.832372 | orchestrator | 10:44:40.828 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:40.832382 | orchestrator | 10:44:40.828 STDOUT terraform:  + protocol = "icmp" 2025-09-19 10:44:40.832403 | orchestrator | 10:44:40.828 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:40.832413 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:40.832423 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:40.832432 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:40.832442 | orchestrator | 10:44:40.828 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:40.832452 | orchestrator | 10:44:40.828 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:40.832461 | orchestrator | 10:44:40.828 STDOUT terraform:  } 2025-09-19 10:44:40.832471 | orchestrator | 10:44:40.828 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-09-19 10:44:40.832481 | orchestrator | 10:44:40.828 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-19 10:44:40.832507 | orchestrator | 10:44:40.828 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:40.832518 | orchestrator | 10:44:40.828 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:40.832527 | orchestrator | 10:44:40.828 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.832537 | orchestrator | 10:44:40.828 STDOUT terraform:  + protocol = "tcp"
2025-09-19 10:44:40.832546 | orchestrator | 10:44:40.828 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.832555 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:40.832565 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:40.832574 | orchestrator | 10:44:40.828 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:40.832584 | orchestrator | 10:44:40.828 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:40.832593 | orchestrator | 10:44:40.828 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.832601 | orchestrator | 10:44:40.828 STDOUT terraform:  }
2025-09-19 10:44:40.832609 | orchestrator | 10:44:40.828 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-19 10:44:40.832622 | orchestrator | 10:44:40.828 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-19 10:44:40.832630 | orchestrator | 10:44:40.829 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:40.832638 | orchestrator | 10:44:40.829 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:40.832646 | orchestrator | 10:44:40.829 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.832654 | orchestrator | 10:44:40.829 STDOUT terraform:  + protocol = "udp"
2025-09-19 10:44:40.832661 | orchestrator | 10:44:40.829 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.832669 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:40.832687 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:40.832694 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:40.832702 | orchestrator | 10:44:40.829 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:40.832710 | orchestrator | 10:44:40.829 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.832718 | orchestrator | 10:44:40.829 STDOUT terraform:  }
2025-09-19 10:44:40.832726 | orchestrator | 10:44:40.829 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-19 10:44:40.832734 | orchestrator | 10:44:40.829 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-19 10:44:40.832742 | orchestrator | 10:44:40.829 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:40.832749 | orchestrator | 10:44:40.829 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:40.832757 | orchestrator | 10:44:40.829 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.832765 | orchestrator | 10:44:40.829 STDOUT terraform:  + protocol = "icmp"
2025-09-19 10:44:40.832773 | orchestrator | 10:44:40.829 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.832780 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:40.832788 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:40.832796 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:40.832804 | orchestrator | 10:44:40.829 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:40.832812 | orchestrator | 10:44:40.829 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.832819 | orchestrator | 10:44:40.829 STDOUT terraform:  }
2025-09-19 10:44:40.832827 | orchestrator | 10:44:40.829 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-19 10:44:40.832835 | orchestrator | 10:44:40.829 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-19 10:44:40.832843 | orchestrator | 10:44:40.829 STDOUT terraform:  + description = "vrrp"
2025-09-19 10:44:40.832876 | orchestrator | 10:44:40.829 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:40.832885 | orchestrator | 10:44:40.829 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:40.832892 | orchestrator | 10:44:40.829 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.832900 | orchestrator | 10:44:40.829 STDOUT terraform:  + protocol = "112"
2025-09-19 10:44:40.832908 | orchestrator | 10:44:40.829 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.832916 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:40.832924 | orchestrator | 10:44:40.829 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:40.832931 | orchestrator | 10:44:40.830 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:40.832949 | orchestrator | 10:44:40.830 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:40.832957 | orchestrator | 10:44:40.830 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.832965 | orchestrator | 10:44:40.830 STDOUT terraform:  }
2025-09-19 10:44:40.832972 | orchestrator | 10:44:40.830 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-19 10:44:40.832980 | orchestrator | 10:44:40.830 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-19 10:44:40.832988 | orchestrator | 10:44:40.830 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:40.832996 | orchestrator | 10:44:40.830 STDOUT terraform:  + description = "management security group"
2025-09-19 10:44:40.833017 | orchestrator | 10:44:40.830 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.833025 | orchestrator | 10:44:40.830 STDOUT terraform:  + name = "testbed-management"
2025-09-19 10:44:40.833033 | orchestrator | 10:44:40.830 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.833041 | orchestrator | 10:44:40.830 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 10:44:40.833049 | orchestrator | 10:44:40.830 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.833057 | orchestrator | 10:44:40.830 STDOUT terraform:  }
2025-09-19 10:44:40.833065 | orchestrator | 10:44:40.830 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-19 10:44:40.833076 | orchestrator | 10:44:40.830 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-19 10:44:40.833084 | orchestrator | 10:44:40.830 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:40.833092 | orchestrator | 10:44:40.830 STDOUT terraform:  + description = "node security group"
2025-09-19 10:44:40.833100 | orchestrator | 10:44:40.830 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.833108 | orchestrator | 10:44:40.830 STDOUT terraform:  + name = "testbed-node"
2025-09-19 10:44:40.833115 | orchestrator | 10:44:40.830 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.833123 | orchestrator | 10:44:40.830 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 10:44:40.833131 | orchestrator | 10:44:40.830 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.833139 | orchestrator | 10:44:40.830 STDOUT terraform:  }
2025-09-19 10:44:40.833147 | orchestrator | 10:44:40.830 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-19 10:44:40.833154 | orchestrator | 10:44:40.830 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-19 10:44:40.833162 | orchestrator | 10:44:40.830 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:40.833170 | orchestrator | 10:44:40.830 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-19 10:44:40.833178 | orchestrator | 10:44:40.830 STDOUT terraform:  + dns_nameservers = [
2025-09-19 10:44:40.833186 | orchestrator | 10:44:40.830 STDOUT terraform:  + "8.8.8.8",
2025-09-19 10:44:40.833200 | orchestrator | 10:44:40.830 STDOUT terraform:  + "9.9.9.9",
2025-09-19 10:44:40.833208 | orchestrator | 10:44:40.830 STDOUT terraform:  ]
2025-09-19 10:44:40.833216 | orchestrator | 10:44:40.830 STDOUT terraform:  + enable_dhcp = true
2025-09-19 10:44:40.833223 | orchestrator | 10:44:40.830 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-19 10:44:40.833231 | orchestrator | 10:44:40.830 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.833239 | orchestrator | 10:44:40.830 STDOUT terraform:  + ip_version = 4
2025-09-19 10:44:40.833247 | orchestrator | 10:44:40.830 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-19 10:44:40.833255 | orchestrator | 10:44:40.830 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-19 10:44:40.833263 | orchestrator | 10:44:40.830 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-19 10:44:40.833275 | orchestrator | 10:44:40.830 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 10:44:40.833283 | orchestrator | 10:44:40.830 STDOUT terraform:  + no_gateway = false
2025-09-19 10:44:40.833291 | orchestrator | 10:44:40.830 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:40.833299 | orchestrator | 10:44:40.831 STDOUT terraform:  + service_types = (known after apply)
2025-09-19 10:44:40.833306 | orchestrator | 10:44:40.831 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:40.833314 | orchestrator | 10:44:40.831 STDOUT terraform:  + allocation_pool {
2025-09-19 10:44:40.833322 | orchestrator | 10:44:40.831 STDOUT terraform:  + end = "192.168.31.250"
2025-09-19 10:44:40.833330 | orchestrator | 10:44:40.831 STDOUT terraform:  + start = "192.168.31.200"
2025-09-19 10:44:40.833337 | orchestrator | 10:44:40.831 STDOUT terraform:  }
2025-09-19 10:44:40.833345 | orchestrator | 10:44:40.831 STDOUT terraform:  }
2025-09-19 10:44:40.833353 | orchestrator | 10:44:40.831 STDOUT terraform:  # terraform_data.image will be created
2025-09-19 10:44:40.833361 | orchestrator | 10:44:40.831 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-19 10:44:40.833369 | orchestrator | 10:44:40.831 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.833377 | orchestrator | 10:44:40.831 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 10:44:40.833385 | orchestrator | 10:44:40.831 STDOUT terraform:  + output = (known after apply)
2025-09-19 10:44:40.833392 | orchestrator | 10:44:40.831 STDOUT terraform:  }
2025-09-19 10:44:40.833425 | orchestrator | 10:44:40.831 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-19 10:44:40.833433 | orchestrator | 10:44:40.831 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-19 10:44:40.833441 | orchestrator | 10:44:40.831 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:40.833449 | orchestrator | 10:44:40.831 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 10:44:40.833456 | orchestrator | 10:44:40.831 STDOUT terraform:  + output = (known after apply)
2025-09-19 10:44:40.833464 | orchestrator | 10:44:40.831 STDOUT terraform:  }
2025-09-19 10:44:40.833472 | orchestrator | 10:44:40.831 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-19 10:44:40.833485 | orchestrator | 10:44:40.831 STDOUT terraform: Changes to Outputs:
2025-09-19 10:44:40.833493 | orchestrator | 10:44:40.831 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-19 10:44:40.833501 | orchestrator | 10:44:40.831 STDOUT terraform:  + private_key = (sensitive value)
2025-09-19 10:44:41.020261 | orchestrator | 10:44:41.020 STDOUT terraform: terraform_data.image: Creating...
2025-09-19 10:44:41.020346 | orchestrator | 10:44:41.020 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-19 10:44:41.020371 | orchestrator | 10:44:41.020 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=71a62d25-8922-2c80-1d34-db8003c312e5]
2025-09-19 10:44:41.020383 | orchestrator | 10:44:41.020 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=40c34701-6f8f-6590-2455-37143d665872]
2025-09-19 10:44:41.042516 | orchestrator | 10:44:41.039 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-19 10:44:41.049330 | orchestrator | 10:44:41.049 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-19 10:44:41.050163 | orchestrator | 10:44:41.049 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-19 10:44:41.061951 | orchestrator | 10:44:41.061 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-19 10:44:41.062136 | orchestrator | 10:44:41.061 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 10:44:41.062157 | orchestrator | 10:44:41.061 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-19 10:44:41.062169 | orchestrator | 10:44:41.061 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
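As an aside, the VRRP rule described by the plan output above (IP protocol 112) corresponds to Terraform HCL roughly like the following sketch. The attribute values are taken directly from the plan; the `security_group_id` expression is an assumption for illustration, since the actual reference lives in the testbed's Terraform sources, which are not part of this log.

```hcl
# Sketch of the VRRP ingress rule from the plan above (protocol 112 = VRRP).
# The security_group_id reference below is assumed, not taken from the repo.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```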
2025-09-19 10:44:41.062179 | orchestrator | 10:44:41.062 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-19 10:44:41.062189 | orchestrator | 10:44:41.062 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 10:44:41.064560 | orchestrator | 10:44:41.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-19 10:44:41.490140 | orchestrator | 10:44:41.489 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 10:44:41.495530 | orchestrator | 10:44:41.493 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 10:44:41.495610 | orchestrator | 10:44:41.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-19 10:44:41.499952 | orchestrator | 10:44:41.499 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 10:44:41.533244 | orchestrator | 10:44:41.533 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-19 10:44:41.536665 | orchestrator | 10:44:41.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-19 10:44:42.079705 | orchestrator | 10:44:42.079 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=e3e5fb0f-10b5-4e14-9ade-9986f0d39694]
2025-09-19 10:44:42.082736 | orchestrator | 10:44:42.082 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 10:44:44.641458 | orchestrator | 10:44:44.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7]
2025-09-19 10:44:44.654280 | orchestrator | 10:44:44.654 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 10:44:44.657197 | orchestrator | 10:44:44.657 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=02d4d70c-9632-40cc-9453-c0d53d6148ed]
2025-09-19 10:44:44.663295 | orchestrator | 10:44:44.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 10:44:44.687516 | orchestrator | 10:44:44.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=29dd875d-2efb-4f11-ac43-6353645f7e36]
2025-09-19 10:44:44.692693 | orchestrator | 10:44:44.692 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 10:44:44.695557 | orchestrator | 10:44:44.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=adddc9ff-e41b-477e-a261-fe5fa77d3a0f]
2025-09-19 10:44:44.700475 | orchestrator | 10:44:44.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 10:44:44.712274 | orchestrator | 10:44:44.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=93b11a5e-f517-4b3c-9813-3ed2f0fa6238]
2025-09-19 10:44:44.716684 | orchestrator | 10:44:44.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6]
2025-09-19 10:44:44.721447 | orchestrator | 10:44:44.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 10:44:44.723422 | orchestrator | 10:44:44.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 10:44:44.761052 | orchestrator | 10:44:44.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=b4727c68-ff73-4ff9-aa8c-694157ecb2dd]
2025-09-19 10:44:44.779810 | orchestrator | 10:44:44.778 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 10:44:44.779877 | orchestrator | 10:44:44.778 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=3322ab10-28f2-47f3-9821-bfcea3cb9d1d]
2025-09-19 10:44:44.779892 | orchestrator | 10:44:44.779 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=14764732-c430-42d5-be90-4134a981fa59]
2025-09-19 10:44:44.784216 | orchestrator | 10:44:44.784 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=dfbb32b5df0e0fe95d1d797da60974a15c86ce6b]
2025-09-19 10:44:44.789751 | orchestrator | 10:44:44.789 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 10:44:44.807595 | orchestrator | 10:44:44.807 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 10:44:44.813809 | orchestrator | 10:44:44.813 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=90938e654218df3f212b0a11d9c8c3b7db0ffef8]
2025-09-19 10:44:45.407526 | orchestrator | 10:44:45.407 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=09613f79-b5a9-459c-8665-9206125e2c07]
2025-09-19 10:44:45.756129 | orchestrator | 10:44:45.755 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=98a2a07c-f540-4098-9438-a5f51eca870f]
2025-09-19 10:44:45.761999 | orchestrator | 10:44:45.761 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-19 10:44:48.056036 | orchestrator | 10:44:48.055 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=1681d7ca-7745-4fd2-bcb7-23c40da03ace]
2025-09-19 10:44:48.074332 | orchestrator | 10:44:48.074 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=df0ee8f0-2c97-4afd-b0d1-816770dc0851]
2025-09-19 10:44:48.115098 | orchestrator | 10:44:48.114 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=42505943-ab11-4a68-89b8-1d4f3cc4dc03]
2025-09-19 10:44:48.126695 | orchestrator | 10:44:48.126 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=cb314818-0d8c-4ce7-852a-bbdb7b6af0f7]
2025-09-19 10:44:48.153655 | orchestrator | 10:44:48.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=0f197da4-9977-4ed0-ade0-de83f43b89ba]
2025-09-19 10:44:48.170452 | orchestrator | 10:44:48.170 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ffa378a0-c75b-4616-81d3-b00e624d57d0]
2025-09-19 10:44:48.561380 | orchestrator | 10:44:48.560 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=bafd5cdb-eccd-46fe-9d48-150a8b6d72dc]
2025-09-19 10:44:48.574397 | orchestrator | 10:44:48.573 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 10:44:48.574471 | orchestrator | 10:44:48.573 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 10:44:48.575712 | orchestrator | 10:44:48.574 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 10:44:48.764584 | orchestrator | 10:44:48.764 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=fe1d2973-bfda-4b7f-b496-10917745fd32]
2025-09-19 10:44:48.774342 | orchestrator | 10:44:48.774 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 10:44:48.775063 | orchestrator | 10:44:48.774 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 10:44:48.775666 | orchestrator | 10:44:48.775 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 10:44:48.778404 | orchestrator | 10:44:48.778 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 10:44:48.779260 | orchestrator | 10:44:48.779 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 10:44:48.779935 | orchestrator | 10:44:48.779 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 10:44:48.944717 | orchestrator | 10:44:48.944 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=420a0c61-20f3-4315-a771-eb96ba0b04b8]
2025-09-19 10:44:49.056646 | orchestrator | 10:44:49.056 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ed72adc6-0fa9-4c5b-af6c-9f3b5d5927ff]
2025-09-19 10:44:49.072917 | orchestrator | 10:44:49.072 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 10:44:49.074826 | orchestrator | 10:44:49.074 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 10:44:49.076030 | orchestrator | 10:44:49.075 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 10:44:49.078626 | orchestrator | 10:44:49.078 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 10:44:49.084134 | orchestrator | 10:44:49.083 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=0f634253-cfa6-4c0f-a72d-733e198a8e36]
2025-09-19 10:44:49.099542 | orchestrator | 10:44:49.099 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 10:44:49.241472 | orchestrator | 10:44:49.241 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0de429ce-3c53-4b26-ac70-026e38868c4c]
2025-09-19 10:44:49.257326 | orchestrator | 10:44:49.257 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 10:44:49.297464 | orchestrator | 10:44:49.297 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=eeb535f7-cecb-4723-9bd0-2870c6a466f2]
2025-09-19 10:44:49.313601 | orchestrator | 10:44:49.313 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 10:44:49.435035 | orchestrator | 10:44:49.434 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=d471fe81-502d-4a82-8210-c9f283e8c6e3]
2025-09-19 10:44:49.450569 | orchestrator | 10:44:49.450 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 10:44:49.494371 | orchestrator | 10:44:49.493 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a80609c0-90b6-4f87-b8af-314b7a9ca27f]
2025-09-19 10:44:49.509840 | orchestrator | 10:44:49.509 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 10:44:49.618587 | orchestrator | 10:44:49.618 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=eed8c6dd-f818-4535-9527-2b10024ef36a]
2025-09-19 10:44:49.623655 | orchestrator | 10:44:49.623 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 10:44:49.658271 | orchestrator | 10:44:49.657 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b1385f30-bf5d-43a8-a678-a51fd6ddffa1]
2025-09-19 10:44:49.807843 | orchestrator | 10:44:49.807 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=6989f155-dc57-45ba-ad86-feb93a4805e6]
2025-09-19 10:44:49.858467 | orchestrator | 10:44:49.858 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2eb10a18-0d66-4452-94be-514006e2d3f8]
2025-09-19 10:44:49.931173 | orchestrator | 10:44:49.930 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e998afa3-3724-4a21-8cac-a69b13a65e05]
2025-09-19 10:44:49.949758 | orchestrator | 10:44:49.949 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=ab53d9b6-e9c9-4b71-a534-0562fd51d062]
2025-09-19 10:44:50.071364 | orchestrator | 10:44:50.071 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=ed561693-eaa4-418e-8bd8-ad83bd284631]
2025-09-19 10:44:50.103899 | orchestrator | 10:44:50.103 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=59746061-37dd-48f7-8db4-8a3b354c27c0]
2025-09-19 10:44:50.215774 | orchestrator | 10:44:50.215 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=d7d53295-7c1f-4a85-85ce-0e77df4f6b3a]
2025-09-19 10:44:50.772650 | orchestrator | 10:44:50.772 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=982c5818-6d08-4148-a5ca-bc557bdbd626]
2025-09-19 10:44:50.896055 | orchestrator | 10:44:50.895 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=a718e1b5-f1de-4bff-b778-86810aec1726]
2025-09-19 10:44:50.910150 | orchestrator | 10:44:50.909 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 10:44:50.930601 | orchestrator | 10:44:50.930 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 10:44:50.931114 | orchestrator | 10:44:50.931 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 10:44:50.934333 | orchestrator | 10:44:50.934 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 10:44:50.935608 | orchestrator | 10:44:50.935 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 10:44:50.946875 | orchestrator | 10:44:50.946 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 10:44:50.947952 | orchestrator | 10:44:50.947 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 10:44:52.229250 | orchestrator | 10:44:52.228 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=7f6af609-b61f-442f-a19f-0e04126cf182]
2025-09-19 10:44:52.236860 | orchestrator | 10:44:52.236 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 10:44:52.245187 | orchestrator | 10:44:52.245 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 10:44:52.245621 | orchestrator | 10:44:52.245 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 10:44:52.554001 | orchestrator | 10:44:52.553 STDOUT terraform: local_file.inventory: Creation complete after 1s [id=4bcf207e678b815ff374f620debf4e54508c40cf]
2025-09-19 10:44:52.554782 | orchestrator | 10:44:52.554 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 1s [id=00acfe91eced26b5eb85c1f041122827a34a5e9a]
2025-09-19 10:44:53.103728 | orchestrator | 10:44:53.102 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=7f6af609-b61f-442f-a19f-0e04126cf182]
2025-09-19 10:45:00.931501 | orchestrator | 10:45:00.931 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 10:45:00.933670 | orchestrator | 10:45:00.933 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 10:45:00.934984 | orchestrator | 10:45:00.934 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 10:45:00.937342 | orchestrator | 10:45:00.937 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 10:45:00.952759 | orchestrator | 10:45:00.952 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 10:45:00.952896 | orchestrator | 10:45:00.952 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 10:45:10.933144 | orchestrator | 10:45:10.932 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 10:45:10.934057 | orchestrator | 10:45:10.933 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 10:45:10.935138 | orchestrator | 10:45:10.934 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 10:45:10.938377 | orchestrator | 10:45:10.938 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 10:45:10.954098 | orchestrator | 10:45:10.953 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 10:45:10.954195 | orchestrator | 10:45:10.953 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 10:45:11.594667 | orchestrator | 10:45:11.594 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=7d64b484-3fd7-4f8f-8725-1af8d36baf1f]
2025-09-19 10:45:11.604476 | orchestrator | 10:45:11.603 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=f94f245a-4c85-4709-a9ce-58adedfa6248]
2025-09-19 10:45:11.729124 | orchestrator | 10:45:11.728 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=4951e5fc-f0b1-447c-bd3a-e1b543d3827d]
2025-09-19 10:45:12.178531 | orchestrator | 10:45:12.178 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=5d5770fc-d2bc-4aa1-83c6-061678d96454]
2025-09-19 10:45:20.936935 | orchestrator | 10:45:20.936 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-19 10:45:20.954236 | orchestrator | 10:45:20.953 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-19 10:45:22.091429 | orchestrator | 10:45:22.091 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=17adc2c3-e9d5-4bce-ab68-8abe27974407]
2025-09-19 10:45:22.258117 | orchestrator | 10:45:22.257 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=aadf0f21-6158-4433-8984-5f2a597bf0cb]
2025-09-19 10:45:22.289652 | orchestrator | 10:45:22.289 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 10:45:22.290560 | orchestrator | 10:45:22.290 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-19 10:45:22.291509 | orchestrator | 10:45:22.291 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8671858180072217487]
2025-09-19 10:45:22.294391 | orchestrator | 10:45:22.294 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-19 10:45:22.298704 | orchestrator | 10:45:22.298 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-19 10:45:22.300599 | orchestrator | 10:45:22.300 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-19 10:45:22.301216 | orchestrator | 10:45:22.301 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-19 10:45:22.304982 | orchestrator | 10:45:22.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-19 10:45:22.306875 | orchestrator | 10:45:22.306 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-19 10:45:22.307154 | orchestrator | 10:45:22.307 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 10:45:22.311090 | orchestrator | 10:45:22.310 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-19 10:45:22.335161 | orchestrator | 10:45:22.335 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-19 10:45:25.673691 | orchestrator | 10:45:25.673 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=aadf0f21-6158-4433-8984-5f2a597bf0cb/53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6]
2025-09-19 10:45:25.709389 | orchestrator | 10:45:25.708 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=7d64b484-3fd7-4f8f-8725-1af8d36baf1f/3322ab10-28f2-47f3-9821-bfcea3cb9d1d]
2025-09-19 10:45:25.713300 | orchestrator | 10:45:25.712 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=5d5770fc-d2bc-4aa1-83c6-061678d96454/29dd875d-2efb-4f11-ac43-6353645f7e36]
2025-09-19 10:45:25.730313 | orchestrator | 10:45:25.729 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=aadf0f21-6158-4433-8984-5f2a597bf0cb/93b11a5e-f517-4b3c-9813-3ed2f0fa6238]
2025-09-19 10:45:25.743399 | orchestrator | 10:45:25.742 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5d5770fc-d2bc-4aa1-83c6-061678d96454/02d4d70c-9632-40cc-9453-c0d53d6148ed]
2025-09-19 10:45:25.752530 | orchestrator | 10:45:25.751 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=7d64b484-3fd7-4f8f-8725-1af8d36baf1f/39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7]
2025-09-19 10:45:31.798362 | orchestrator | 10:45:31.797 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=aadf0f21-6158-4433-8984-5f2a597bf0cb/adddc9ff-e41b-477e-a261-fe5fa77d3a0f]
2025-09-19 10:45:31.825973 | orchestrator | 10:45:31.825 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=7d64b484-3fd7-4f8f-8725-1af8d36baf1f/b4727c68-ff73-4ff9-aa8c-694157ecb2dd]
2025-09-19 10:45:31.838971 | orchestrator | 10:45:31.838 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=5d5770fc-d2bc-4aa1-83c6-061678d96454/14764732-c430-42d5-be90-4134a981fa59]
2025-09-19 10:45:32.336448 | orchestrator | 10:45:32.336 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-19 10:45:42.336720 | orchestrator | 10:45:42.336 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-19 10:45:43.233636 | orchestrator | 10:45:43.233 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=d35982e9-b312-4051-81b4-2fe06a2c1c7f]
2025-09-19 10:45:43.248506 | orchestrator | 10:45:43.248 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-19 10:45:43.248597 | orchestrator | 10:45:43.248 STDOUT terraform: Outputs: 2025-09-19 10:45:43.248625 | orchestrator | 10:45:43.248 STDOUT terraform: manager_address = 2025-09-19 10:45:43.248638 | orchestrator | 10:45:43.248 STDOUT terraform: private_key = 2025-09-19 10:45:43.409912 | orchestrator | ok: Runtime: 0:01:08.497843 2025-09-19 10:45:43.448473 | 2025-09-19 10:45:43.448609 | TASK [Create infrastructure (stable)] 2025-09-19 10:45:43.981340 | orchestrator | skipping: Conditional result was False 2025-09-19 10:45:43.999982 | 2025-09-19 10:45:44.000294 | TASK [Fetch manager address] 2025-09-19 10:45:44.492876 | orchestrator | ok 2025-09-19 10:45:44.502255 | 2025-09-19 10:45:44.502379 | TASK [Set manager_host address] 2025-09-19 10:45:44.581456 | orchestrator | ok 2025-09-19 10:45:44.591638 | 2025-09-19 10:45:44.591760 | LOOP [Update ansible collections] 2025-09-19 10:45:46.496750 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:45:46.497134 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 10:45:46.497254 | orchestrator | Starting galaxy collection install process 2025-09-19 10:45:46.497301 | orchestrator | Process install dependency map 2025-09-19 10:45:46.497339 | orchestrator | Starting collection install process 2025-09-19 10:45:46.497375 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-09-19 10:45:46.497416 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-09-19 10:45:46.497454 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-19 10:45:46.497530 | orchestrator | ok: Item: commons Runtime: 0:00:01.583048 2025-09-19 10:45:47.382079 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-09-19 10:45:47.382283 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:45:47.382339 | orchestrator | Starting galaxy collection install process 2025-09-19 10:45:47.382379 | orchestrator | Process install dependency map 2025-09-19 10:45:47.382417 | orchestrator | Starting collection install process 2025-09-19 10:45:47.382452 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-19 10:45:47.382487 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-19 10:45:47.382520 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-19 10:45:47.382574 | orchestrator | ok: Item: services Runtime: 0:00:00.617105 2025-09-19 10:45:47.406319 | 2025-09-19 10:45:47.406501 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 10:45:57.933307 | orchestrator | ok 2025-09-19 10:45:57.945048 | 2025-09-19 10:45:57.945173 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 10:46:57.984363 | orchestrator | ok 2025-09-19 10:46:57.992926 | 2025-09-19 10:46:57.993027 | TASK [Fetch manager ssh hostkey] 2025-09-19 10:46:59.560723 | orchestrator | Output suppressed because no_log was given 2025-09-19 10:46:59.575845 | 2025-09-19 10:46:59.576005 | TASK [Get ssh keypair from terraform environment] 2025-09-19 10:47:00.112342 | orchestrator | ok: Runtime: 0:00:00.008418 2025-09-19 10:47:00.128289 | 2025-09-19 10:47:00.128454 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 10:47:00.165914 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-19 10:47:00.175422 | 2025-09-19 10:47:00.175544 | TASK [Run manager part 0] 2025-09-19 10:47:01.429112 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:47:01.530148 | orchestrator | 2025-09-19 10:47:01.530184 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-19 10:47:01.530191 | orchestrator | 2025-09-19 10:47:01.530203 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-19 10:47:03.414132 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:03.414172 | orchestrator | 2025-09-19 10:47:03.414193 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 10:47:03.414202 | orchestrator | 2025-09-19 10:47:03.414210 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:47:05.421347 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:05.421409 | orchestrator | 2025-09-19 10:47:05.421419 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 10:47:06.049152 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:06.049190 | orchestrator | 2025-09-19 10:47:06.049198 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 10:47:06.092156 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.092188 | orchestrator | 2025-09-19 10:47:06.092197 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-19 10:47:06.117720 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.117753 | orchestrator | 2025-09-19 10:47:06.117759 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 10:47:06.142195 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.142268 | 
orchestrator | 2025-09-19 10:47:06.142289 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 10:47:06.171345 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.171390 | orchestrator | 2025-09-19 10:47:06.171400 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 10:47:06.199476 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.199518 | orchestrator | 2025-09-19 10:47:06.199527 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-19 10:47:06.238509 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.238568 | orchestrator | 2025-09-19 10:47:06.238587 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-19 10:47:06.264788 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:06.264860 | orchestrator | 2025-09-19 10:47:06.264871 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-19 10:47:07.007116 | orchestrator | changed: [testbed-manager] 2025-09-19 10:47:07.007155 | orchestrator | 2025-09-19 10:47:07.007161 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-19 10:49:48.719952 | orchestrator | changed: [testbed-manager] 2025-09-19 10:49:48.720023 | orchestrator | 2025-09-19 10:49:48.720042 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 10:51:12.169730 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:12.169811 | orchestrator | 2025-09-19 10:51:12.169823 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 10:51:37.375344 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:37.375435 | orchestrator | 2025-09-19 10:51:37.375455 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-19 10:51:46.602085 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:46.602181 | orchestrator | 2025-09-19 10:51:46.602197 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 10:51:46.652315 | orchestrator | ok: [testbed-manager] 2025-09-19 10:51:46.652399 | orchestrator | 2025-09-19 10:51:46.652415 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-19 10:51:47.464212 | orchestrator | ok: [testbed-manager] 2025-09-19 10:51:47.465079 | orchestrator | 2025-09-19 10:51:47.465268 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-19 10:51:48.164043 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:48.164099 | orchestrator | 2025-09-19 10:51:48.164107 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-19 10:51:54.618439 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:54.618510 | orchestrator | 2025-09-19 10:51:54.618549 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-19 10:52:00.669212 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:00.669292 | orchestrator | 2025-09-19 10:52:00.669310 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-19 10:52:03.225141 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:03.225275 | orchestrator | 2025-09-19 10:52:03.225292 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-19 10:52:05.028933 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:05.028971 | orchestrator | 2025-09-19 10:52:05.028978 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-19 
10:52:06.139114 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 10:52:06.139232 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 10:52:06.139249 | orchestrator | 2025-09-19 10:52:06.139261 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-19 10:52:06.184256 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 10:52:06.184303 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 10:52:06.184309 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 10:52:06.184314 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-19 10:52:11.234866 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 10:52:11.234951 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 10:52:11.234963 | orchestrator | 2025-09-19 10:52:11.234974 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-19 10:52:11.812822 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:11.813073 | orchestrator | 2025-09-19 10:52:11.813094 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-19 10:57:30.513931 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-19 10:57:30.514077 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-19 10:57:30.514100 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-19 10:57:30.514113 | orchestrator | 2025-09-19 10:57:30.514127 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-19 10:57:32.848890 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-19 10:57:32.848998 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-19 10:57:32.849010 | orchestrator | 2025-09-19 10:57:32.849018 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-19 10:57:32.849027 | orchestrator | 2025-09-19 10:57:32.849035 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:57:34.267284 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:34.267370 | orchestrator | 2025-09-19 10:57:34.267385 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 10:57:34.315245 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:34.315310 | orchestrator | 2025-09-19 10:57:34.315325 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 10:57:34.398005 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:34.398085 | orchestrator | 2025-09-19 10:57:34.398091 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 10:57:35.137257 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:35.137345 | orchestrator | 2025-09-19 10:57:35.137360 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 10:57:35.903585 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:35.903713 | orchestrator | 2025-09-19 10:57:35.903730 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 10:57:37.302701 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-19 10:57:37.302789 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-19 10:57:37.302802 | orchestrator | 2025-09-19 10:57:37.302832 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-19 10:57:38.685852 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:38.685926 | orchestrator | 2025-09-19 10:57:38.685937 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 10:57:40.398604 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 10:57:40.398672 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-19 10:57:40.398680 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-19 10:57:40.398687 | orchestrator | 2025-09-19 10:57:40.398694 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 10:57:40.454526 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:40.454575 | orchestrator | 2025-09-19 10:57:40.454586 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 10:57:41.014211 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:41.014303 | orchestrator | 2025-09-19 10:57:41.014321 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 10:57:41.090914 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:41.091001 | orchestrator | 2025-09-19 10:57:41.091018 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 10:57:42.009277 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 10:57:42.009356 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:42.009369 | orchestrator | 2025-09-19 10:57:42.009379 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 10:57:42.048804 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:42.048881 | orchestrator | 2025-09-19 10:57:42.048895 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 10:57:42.080418 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:42.080497 | orchestrator | 2025-09-19 10:57:42.080512 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 10:57:42.113876 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:42.113952 | orchestrator | 2025-09-19 10:57:42.113967 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 10:57:42.162367 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:42.162454 | orchestrator | 2025-09-19 10:57:42.162471 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 10:57:42.901018 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:42.901098 | orchestrator | 2025-09-19 10:57:42.901111 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 10:57:42.901122 | orchestrator | 2025-09-19 10:57:42.901143 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:57:44.383808 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:44.383995 | orchestrator | 2025-09-19 10:57:44.384022 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-19 10:57:45.326482 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:45.326574 | orchestrator | 2025-09-19 10:57:45.326591 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 10:57:45.326630 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-19 10:57:45.326643 | orchestrator | 2025-09-19 10:57:45.589316 | orchestrator | ok: Runtime: 0:10:44.957578 2025-09-19 10:57:45.606782 | 2025-09-19 10:57:45.606946 | TASK [Point 
out that logging in to the manager is now possible] 2025-09-19 10:57:45.646574 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-09-19 10:57:45.656927 | 2025-09-19 10:57:45.657054 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 10:57:45.694480 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-19 10:57:45.704630 | 2025-09-19 10:57:45.704749 | TASK [Run manager part 1 + 2] 2025-09-19 10:57:46.522732 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:57:46.577477 | orchestrator | 2025-09-19 10:57:46.577526 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-19 10:57:46.577533 | orchestrator | 2025-09-19 10:57:46.577545 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:57:49.473302 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:49.473355 | orchestrator | 2025-09-19 10:57:49.473378 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 10:57:49.514660 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:49.514708 | orchestrator | 2025-09-19 10:57:49.514716 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 10:57:49.550082 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:49.550127 | orchestrator | 2025-09-19 10:57:49.550137 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 10:57:49.581145 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:49.581189 | orchestrator | 2025-09-19 10:57:49.581196 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-19 10:57:49.643915 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:49.643968 | orchestrator | 2025-09-19 10:57:49.643978 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 10:57:49.702919 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:49.702974 | orchestrator | 2025-09-19 10:57:49.702985 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 10:57:49.750983 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-19 10:57:49.751027 | orchestrator | 2025-09-19 10:57:49.751032 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 10:57:50.472125 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:50.472183 | orchestrator | 2025-09-19 10:57:50.472193 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 10:57:50.520777 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:57:50.520837 | orchestrator | 2025-09-19 10:57:50.520845 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 10:57:51.897684 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:51.897751 | orchestrator | 2025-09-19 10:57:51.897761 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 10:57:52.440670 | orchestrator | ok: [testbed-manager] 2025-09-19 10:57:52.440715 | orchestrator | 2025-09-19 10:57:52.440724 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 10:57:53.506667 | orchestrator | changed: [testbed-manager] 2025-09-19 10:57:53.506722 | orchestrator | 2025-09-19 10:57:53.506737 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-19 10:58:09.558954 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:09.559150 | orchestrator | 2025-09-19 10:58:09.559175 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 10:58:10.213273 | orchestrator | ok: [testbed-manager] 2025-09-19 10:58:10.213319 | orchestrator | 2025-09-19 10:58:10.213329 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 10:58:10.264131 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:58:10.264221 | orchestrator | 2025-09-19 10:58:10.264230 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-19 10:58:11.212688 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:11.212780 | orchestrator | 2025-09-19 10:58:11.212797 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-19 10:58:12.198927 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:12.199025 | orchestrator | 2025-09-19 10:58:12.199036 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-19 10:58:12.838244 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:12.838332 | orchestrator | 2025-09-19 10:58:12.838347 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-19 10:58:12.878654 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 10:58:12.878758 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 10:58:12.878775 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 10:58:12.878787 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-19 10:58:15.248340 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:15.248419 | orchestrator | 2025-09-19 10:58:15.248433 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-19 10:58:25.389656 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-19 10:58:25.389728 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-19 10:58:25.389738 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-19 10:58:25.389745 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-19 10:58:25.389759 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-19 10:58:25.389765 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-19 10:58:25.389772 | orchestrator | 2025-09-19 10:58:25.389779 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-19 10:58:26.472768 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:26.472809 | orchestrator | 2025-09-19 10:58:26.472818 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-19 10:58:26.519698 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:58:26.519739 | orchestrator | 2025-09-19 10:58:26.519748 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-19 10:58:29.889446 | orchestrator | changed: [testbed-manager] 2025-09-19 10:58:29.889481 | orchestrator | 2025-09-19 10:58:29.889486 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-19 10:58:29.922294 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:58:29.922326 | orchestrator | 2025-09-19 10:58:29.922332 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-19 11:00:05.997164 | orchestrator | changed: [testbed-manager] 2025-09-19 
11:00:05.997269 | orchestrator | 2025-09-19 11:00:05.997290 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 11:00:07.018062 | orchestrator | ok: [testbed-manager] 2025-09-19 11:00:07.018092 | orchestrator | 2025-09-19 11:00:07.018099 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:00:07.018105 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-19 11:00:07.018110 | orchestrator | 2025-09-19 11:00:07.324729 | orchestrator | ok: Runtime: 0:02:21.080179 2025-09-19 11:00:07.342063 | 2025-09-19 11:00:07.342219 | TASK [Reboot manager] 2025-09-19 11:00:08.877749 | orchestrator | ok: Runtime: 0:00:00.881946 2025-09-19 11:00:08.895024 | 2025-09-19 11:00:08.895188 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 11:00:22.619585 | orchestrator | ok 2025-09-19 11:00:22.629941 | 2025-09-19 11:00:22.630060 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 11:01:22.677502 | orchestrator | ok 2025-09-19 11:01:22.687819 | 2025-09-19 11:01:22.687945 | TASK [Deploy manager + bootstrap nodes] 2025-09-19 11:01:25.412291 | orchestrator | 2025-09-19 11:01:25.412532 | orchestrator | # DEPLOY MANAGER 2025-09-19 11:01:25.412558 | orchestrator | 2025-09-19 11:01:25.412572 | orchestrator | + set -e 2025-09-19 11:01:25.412584 | orchestrator | + echo 2025-09-19 11:01:25.412597 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-19 11:01:25.412612 | orchestrator | + echo 2025-09-19 11:01:25.412657 | orchestrator | + cat /opt/manager-vars.sh 2025-09-19 11:01:25.415602 | orchestrator | export NUMBER_OF_NODES=6 2025-09-19 11:01:25.415663 | orchestrator | 2025-09-19 11:01:25.415673 | orchestrator | export CEPH_VERSION=reef 2025-09-19 11:01:25.415682 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-19 11:01:25.415690 | orchestrator 
| export MANAGER_VERSION=latest
2025-09-19 11:01:25.415709 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-09-19 11:01:25.415716 | orchestrator |
2025-09-19 11:01:25.415728 | orchestrator | export ARA=false
2025-09-19 11:01:25.415735 | orchestrator | export DEPLOY_MODE=manager
2025-09-19 11:01:25.415746 | orchestrator | export TEMPEST=false
2025-09-19 11:01:25.415753 | orchestrator | export IS_ZUUL=true
2025-09-19 11:01:25.415760 | orchestrator |
2025-09-19 11:01:25.415771 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121
2025-09-19 11:01:25.415779 | orchestrator | export EXTERNAL_API=false
2025-09-19 11:01:25.415785 | orchestrator |
2025-09-19 11:01:25.415792 | orchestrator | export IMAGE_USER=ubuntu
2025-09-19 11:01:25.415801 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-09-19 11:01:25.415808 | orchestrator |
2025-09-19 11:01:25.415814 | orchestrator | export CEPH_STACK=ceph-ansible
2025-09-19 11:01:25.415828 | orchestrator |
2025-09-19 11:01:25.415835 | orchestrator | + echo
2025-09-19 11:01:25.415843 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 11:01:25.416815 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 11:01:25.416828 | orchestrator | ++ INTERACTIVE=false
2025-09-19 11:01:25.416836 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 11:01:25.416844 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 11:01:25.416855 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 11:01:25.416863 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 11:01:25.416871 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 11:01:25.416879 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 11:01:25.416885 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 11:01:25.416953 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 11:01:25.416963 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 11:01:25.416970 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 11:01:25.416977 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 11:01:25.416985 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 11:01:25.417004 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 11:01:25.417012 | orchestrator | ++ export ARA=false
2025-09-19 11:01:25.417019 | orchestrator | ++ ARA=false
2025-09-19 11:01:25.417026 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 11:01:25.417032 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 11:01:25.417039 | orchestrator | ++ export TEMPEST=false
2025-09-19 11:01:25.417050 | orchestrator | ++ TEMPEST=false
2025-09-19 11:01:25.417057 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 11:01:25.417064 | orchestrator | ++ IS_ZUUL=true
2025-09-19 11:01:25.417070 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121
2025-09-19 11:01:25.417077 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121
2025-09-19 11:01:25.417084 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 11:01:25.417091 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 11:01:25.417098 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 11:01:25.417105 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 11:01:25.417114 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 11:01:25.417121 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 11:01:25.417128 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 11:01:25.417135 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 11:01:25.417142 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-19 11:01:25.477555 | orchestrator | + docker version
2025-09-19 11:01:25.772153 | orchestrator | Client: Docker Engine - Community
2025-09-19 11:01:25.772232 | orchestrator | Version: 27.5.1
2025-09-19 11:01:25.772241 | orchestrator | API version: 1.47
2025-09-19 11:01:25.772247 | orchestrator | Go version: go1.22.11
2025-09-19 11:01:25.772251 | orchestrator | Git commit: 9f9e405
2025-09-19 11:01:25.772256 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 11:01:25.772263 | orchestrator | OS/Arch: linux/amd64
2025-09-19 11:01:25.772267 | orchestrator | Context: default
2025-09-19 11:01:25.772272 | orchestrator |
2025-09-19 11:01:25.772277 | orchestrator | Server: Docker Engine - Community
2025-09-19 11:01:25.772282 | orchestrator | Engine:
2025-09-19 11:01:25.772287 | orchestrator | Version: 27.5.1
2025-09-19 11:01:25.772292 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-19 11:01:25.772318 | orchestrator | Go version: go1.22.11
2025-09-19 11:01:25.772323 | orchestrator | Git commit: 4c9b3b0
2025-09-19 11:01:25.772328 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 11:01:25.772333 | orchestrator | OS/Arch: linux/amd64
2025-09-19 11:01:25.772337 | orchestrator | Experimental: false
2025-09-19 11:01:25.772342 | orchestrator | containerd:
2025-09-19 11:01:25.772358 | orchestrator | Version: 1.7.27
2025-09-19 11:01:25.772363 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-19 11:01:25.772368 | orchestrator | runc:
2025-09-19 11:01:25.772373 | orchestrator | Version: 1.2.5
2025-09-19 11:01:25.772377 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-19 11:01:25.772382 | orchestrator | docker-init:
2025-09-19 11:01:25.772387 | orchestrator | Version: 0.19.0
2025-09-19 11:01:25.772392 | orchestrator | GitCommit: de40ad0
2025-09-19 11:01:25.775018 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-19 11:01:25.786194 | orchestrator | + set -e
2025-09-19 11:01:25.786215 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 11:01:25.786221 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 11:01:25.786226 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 11:01:25.786231 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 11:01:25.786237 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 11:01:25.786242 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 11:01:25.786247 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 11:01:25.786252 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 11:01:25.786257 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 11:01:25.786262 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 11:01:25.786267 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 11:01:25.786272 | orchestrator | ++ export ARA=false
2025-09-19 11:01:25.786278 | orchestrator | ++ ARA=false
2025-09-19 11:01:25.786283 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 11:01:25.786287 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 11:01:25.786292 | orchestrator | ++ export TEMPEST=false
2025-09-19 11:01:25.786298 | orchestrator | ++ TEMPEST=false
2025-09-19 11:01:25.786303 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 11:01:25.786307 | orchestrator | ++ IS_ZUUL=true
2025-09-19 11:01:25.786312 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121
2025-09-19 11:01:25.786318 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121
2025-09-19 11:01:25.786323 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 11:01:25.786328 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 11:01:25.786333 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 11:01:25.786337 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 11:01:25.786343 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 11:01:25.786363 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 11:01:25.786368 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 11:01:25.786378 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 11:01:25.786384 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 11:01:25.786389 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 11:01:25.786394 | orchestrator | ++ INTERACTIVE=false
2025-09-19 11:01:25.786399 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 11:01:25.786407 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 11:01:25.786420 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 11:01:25.786425 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 11:01:25.786430 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-19 11:01:25.791825 | orchestrator | + set -e
2025-09-19 11:01:25.791841 | orchestrator | + VERSION=reef
2025-09-19 11:01:25.793260 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-19 11:01:25.800526 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-19 11:01:25.800558 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-19 11:01:25.807621 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-19 11:01:25.814302 | orchestrator | + set -e
2025-09-19 11:01:25.814399 | orchestrator | + VERSION=2024.2
2025-09-19 11:01:25.815609 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-19 11:01:25.819663 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-19 11:01:25.819706 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-19 11:01:25.825984 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-19 11:01:25.826893 | orchestrator | ++ semver latest 7.0.0
2025-09-19 11:01:25.891195 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 11:01:25.891300 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 11:01:25.891322 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-19 11:01:25.891335 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-19 11:01:25.988898 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 11:01:25.992096 | orchestrator | + source /opt/venv/bin/activate
2025-09-19 11:01:25.993016 | orchestrator | ++ deactivate nondestructive
2025-09-19 11:01:25.993048 | orchestrator | ++ '[' -n '' ']'
2025-09-19 11:01:25.993061 | orchestrator | ++ '[' -n '' ']'
2025-09-19 11:01:25.993073 | orchestrator | ++ hash -r
2025-09-19 11:01:25.993090 | orchestrator | ++ '[' -n '' ']'
2025-09-19 11:01:25.993101 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-19 11:01:25.993112 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-19 11:01:25.993123 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-19 11:01:25.993135 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-19 11:01:25.993157 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-19 11:01:25.993174 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-19 11:01:25.993185 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-19 11:01:25.993201 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 11:01:25.993212 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 11:01:25.993323 | orchestrator | ++ export PATH
2025-09-19 11:01:25.993339 | orchestrator | ++ '[' -n '' ']'
2025-09-19 11:01:25.993388 | orchestrator | ++ '[' -z '' ']'
2025-09-19 11:01:25.993408 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-19 11:01:25.993425 | orchestrator | ++ PS1='(venv) '
2025-09-19 11:01:25.993441 | orchestrator | ++ export PS1
2025-09-19 11:01:25.993467 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-19 11:01:25.993489 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-19 11:01:25.993507 | orchestrator | ++ hash -r
2025-09-19 11:01:25.993547 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-19 11:01:27.389910 | orchestrator |
2025-09-19 11:01:27.390122 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-19 11:01:27.390143 | orchestrator |
2025-09-19 11:01:27.390155 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 11:01:27.963873 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:27.963977 | orchestrator |
2025-09-19 11:01:27.963993 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 11:01:29.011063 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:29.011167 | orchestrator |
2025-09-19 11:01:29.011183 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-19 11:01:29.011196 | orchestrator |
2025-09-19 11:01:29.011207 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 11:01:31.390923 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:31.391015 | orchestrator |
2025-09-19 11:01:31.391024 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-19 11:01:31.447608 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:31.447728 | orchestrator |
2025-09-19 11:01:31.447751 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-19 11:01:31.919626 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:31.919733 | orchestrator |
2025-09-19 11:01:31.919748 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-19 11:01:31.961918 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:31.962149 | orchestrator |
2025-09-19 11:01:31.962172 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-19 11:01:32.299047 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:32.299150 | orchestrator |
2025-09-19 11:01:32.299165 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-19 11:01:32.346803 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:32.346895 | orchestrator |
2025-09-19 11:01:32.346909 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-19 11:01:32.688964 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:32.689153 | orchestrator |
2025-09-19 11:01:32.690050 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-19 11:01:32.808025 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:32.808121 | orchestrator |
2025-09-19 11:01:32.808134 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-19 11:01:32.808145 | orchestrator |
2025-09-19 11:01:32.808158 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 11:01:34.541622 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:34.541718 | orchestrator |
2025-09-19 11:01:34.541735 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-19 11:01:34.646216 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-19 11:01:34.646310 | orchestrator |
2025-09-19 11:01:34.646388 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-19 11:01:34.706252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-19 11:01:34.706365 | orchestrator |
2025-09-19 11:01:34.706380 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-19 11:01:35.926253 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-19 11:01:35.926417 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-19 11:01:35.926435 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-19 11:01:35.926447 | orchestrator |
2025-09-19 11:01:35.926460 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-19 11:01:37.748107 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-19 11:01:37.748231 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-19 11:01:37.748250 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-19 11:01:37.748262 | orchestrator |
2025-09-19 11:01:37.748275 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-19 11:01:38.403677 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:01:38.403768 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:38.403782 | orchestrator |
2025-09-19 11:01:38.403793 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-19 11:01:39.087690 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:01:39.087764 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:39.087771 | orchestrator |
2025-09-19 11:01:39.087776 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-19 11:01:39.126393 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:39.126469 | orchestrator |
2025-09-19 11:01:39.126483 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-19 11:01:39.470576 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:39.470681 | orchestrator |
2025-09-19 11:01:39.470698 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-19 11:01:39.548296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-19 11:01:39.548465 | orchestrator |
2025-09-19 11:01:39.548496 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-19 11:01:40.579119 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:40.579218 | orchestrator |
2025-09-19 11:01:40.579234 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-19 11:01:41.396623 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:41.396721 | orchestrator |
2025-09-19 11:01:41.396737 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-19 11:01:54.706509 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:54.706604 | orchestrator |
2025-09-19 11:01:54.706616 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-19 11:01:54.769308 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:54.769434 | orchestrator |
2025-09-19 11:01:54.769450 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-19 11:01:54.769462 | orchestrator |
2025-09-19 11:01:54.769474 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 11:01:57.613857 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:57.613964 | orchestrator |
2025-09-19 11:01:57.614008 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-19 11:01:57.727387 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-19 11:01:57.727484 | orchestrator |
2025-09-19 11:01:57.727498 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-19 11:01:57.780563 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:01:57.780646 | orchestrator |
2025-09-19 11:01:57.780659 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-19 11:02:00.345156 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:00.345280 | orchestrator |
2025-09-19 11:02:00.345297 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-19 11:02:00.400524 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:00.400604 | orchestrator |
2025-09-19 11:02:00.400620 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-19 11:02:00.520499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-19 11:02:00.520597 | orchestrator |
2025-09-19 11:02:00.520612 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-19 11:02:03.469215 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-19 11:02:03.469369 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-19 11:02:03.469386 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-19 11:02:03.469398 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-19 11:02:03.469410 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-19 11:02:03.469420 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-19 11:02:03.469431 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-19 11:02:03.469442 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-19 11:02:03.469453 | orchestrator |
2025-09-19 11:02:03.469466 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-19 11:02:04.124790 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:04.124892 | orchestrator |
2025-09-19 11:02:04.124906 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-19 11:02:04.750891 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:04.750979 | orchestrator |
2025-09-19 11:02:04.750992 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-19 11:02:04.831965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-19 11:02:04.832083 | orchestrator |
2025-09-19 11:02:04.832106 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-19 11:02:06.061949 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-19 11:02:06.062111 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-19 11:02:06.062131 | orchestrator |
2025-09-19 11:02:06.062140 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-19 11:02:06.664491 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:06.664586 | orchestrator |
2025-09-19 11:02:06.664599 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-19 11:02:06.721289 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:02:06.721408 | orchestrator |
2025-09-19 11:02:06.721423 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-19 11:02:06.791446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-19 11:02:06.791538 | orchestrator |
2025-09-19 11:02:06.791552 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-19 11:02:07.419720 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:07.419819 | orchestrator |
2025-09-19 11:02:07.419834 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-19 11:02:07.480610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-19 11:02:07.480743 | orchestrator |
2025-09-19 11:02:07.480759 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-19 11:02:08.869075 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:02:08.869163 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:02:08.869174 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:08.869183 | orchestrator |
2025-09-19 11:02:08.869192 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-19 11:02:09.500600 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:09.500697 | orchestrator |
2025-09-19 11:02:09.500714 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-19 11:02:09.549505 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:02:09.549573 | orchestrator |
2025-09-19 11:02:09.549587 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-19 11:02:09.654890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-19 11:02:09.654976 | orchestrator |
2025-09-19 11:02:09.654990 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-19 11:02:10.192852 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:10.192949 | orchestrator |
2025-09-19 11:02:10.192965 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-19 11:02:10.596031 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:10.596130 | orchestrator |
2025-09-19 11:02:10.596147 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-19 11:02:11.861730 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-19 11:02:11.861840 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-19 11:02:11.861855 | orchestrator |
2025-09-19 11:02:11.861868 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-19 11:02:12.506942 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:12.507044 | orchestrator |
2025-09-19 11:02:12.507060 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-19 11:02:12.923186 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:12.923293 | orchestrator |
2025-09-19 11:02:12.923344 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-19 11:02:13.296848 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:13.296947 | orchestrator |
2025-09-19 11:02:13.296962 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-19 11:02:13.344697 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:02:13.344751 | orchestrator |
2025-09-19 11:02:13.344766 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-19 11:02:13.421924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-19 11:02:13.422080 | orchestrator |
2025-09-19 11:02:13.422097 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-19 11:02:13.460896 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:13.460970 | orchestrator |
2025-09-19 11:02:13.460984 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-19 11:02:15.512424 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-19 11:02:15.512555 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-19 11:02:15.512580 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-19 11:02:15.512603 | orchestrator |
2025-09-19 11:02:15.512625 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-19 11:02:16.227062 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:16.227167 | orchestrator |
2025-09-19 11:02:16.227186 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-19 11:02:16.941419 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:16.941524 | orchestrator |
2025-09-19 11:02:16.941537 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-19 11:02:17.692918 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:17.693023 | orchestrator |
2025-09-19 11:02:17.693039 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-19 11:02:17.771636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-19 11:02:17.771720 | orchestrator |
2025-09-19 11:02:17.771734 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-19 11:02:17.815257 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:17.815377 | orchestrator |
2025-09-19 11:02:17.815391 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-19 11:02:18.512007 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-19 11:02:18.512110 | orchestrator |
2025-09-19 11:02:18.512126 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-19 11:02:18.588923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-19 11:02:18.589011 | orchestrator |
2025-09-19 11:02:18.589024 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-19 11:02:19.302417 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:19.302490 | orchestrator |
2025-09-19 11:02:19.302498 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-19 11:02:19.897956 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:19.898080 | orchestrator |
2025-09-19 11:02:19.898092 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-19 11:02:19.956976 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:02:19.957080 | orchestrator |
2025-09-19 11:02:19.957105 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-19 11:02:20.003848 | orchestrator | ok: [testbed-manager]
2025-09-19 11:02:20.003923 | orchestrator |
2025-09-19 11:02:20.003931 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-19 11:02:20.846537 | orchestrator | changed: [testbed-manager]
2025-09-19 11:02:20.846628 | orchestrator |
2025-09-19 11:02:20.846643 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-19 11:03:27.468561 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:27.468677 | orchestrator |
2025-09-19 11:03:27.468695 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-19 11:03:28.459043 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:28.459139 | orchestrator |
2025-09-19 11:03:28.459154 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-19 11:03:28.519800 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:03:28.519890 | orchestrator |
2025-09-19 11:03:28.519907 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-19 11:03:31.087300 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:31.087406 | orchestrator |
2025-09-19 11:03:31.087423 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-19 11:03:31.162232 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:31.162287 | orchestrator |
2025-09-19 11:03:31.162300 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 11:03:31.162311 | orchestrator |
2025-09-19 11:03:31.162323 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-19 11:03:31.208664 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:03:31.208696 | orchestrator |
2025-09-19 11:03:31.208708 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-19 11:04:31.260866 | orchestrator | Pausing for 60 seconds
2025-09-19 11:04:31.261005 | orchestrator | changed: [testbed-manager]
2025-09-19 11:04:31.261022 | orchestrator |
2025-09-19 11:04:31.261036 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-19 11:04:36.359390 | orchestrator | changed: [testbed-manager]
2025-09-19 11:04:36.359490 | orchestrator |
2025-09-19 11:04:36.359508 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-19 11:05:17.963363 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-19 11:05:17.963480 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-19 11:05:17.963496 | orchestrator | changed: [testbed-manager]
2025-09-19 11:05:17.963535 | orchestrator |
2025-09-19 11:05:17.963548 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-19 11:05:27.872838 | orchestrator | changed: [testbed-manager]
2025-09-19 11:05:27.872947 | orchestrator |
2025-09-19 11:05:27.872963 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-19 11:05:27.958468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-19 11:05:27.958548 | orchestrator |
2025-09-19 11:05:27.958557 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 11:05:27.958563 | orchestrator |
2025-09-19 11:05:27.958575 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-19 11:05:28.013805 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:05:28.013905 | orchestrator |
2025-09-19 11:05:28.013920 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:05:28.013934 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-19 11:05:28.013945 | orchestrator |
2025-09-19 11:05:28.122493 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 11:05:28.122588 | orchestrator | + deactivate
2025-09-19 11:05:28.122603 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 11:05:28.122616 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 11:05:28.122628 | orchestrator | + export PATH
2025-09-19 11:05:28.122639 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 11:05:28.122650 | orchestrator | + '[' -n '' ']'
2025-09-19 11:05:28.122661 | orchestrator | + hash -r
2025-09-19 11:05:28.122726 | orchestrator | + '[' -n '' ']'
2025-09-19 11:05:28.122741 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 11:05:28.122752 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 11:05:28.122763 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 11:05:28.122785 | orchestrator | + unset -f deactivate
2025-09-19 11:05:28.122797 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-19 11:05:28.132109 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 11:05:28.132194 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 11:05:28.132209 | orchestrator | + local max_attempts=60
2025-09-19 11:05:28.132221 | orchestrator | + local name=ceph-ansible
2025-09-19 11:05:28.132232 | orchestrator | + local attempt_num=1
2025-09-19 11:05:28.132576 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:05:28.171829 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:05:28.171915 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 11:05:28.171928 | orchestrator | + local max_attempts=60
2025-09-19 11:05:28.171939 | orchestrator | + local name=kolla-ansible
2025-09-19 11:05:28.171949 | orchestrator | + local attempt_num=1
2025-09-19 11:05:28.172671 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 11:05:28.214996 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:05:28.215150 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 11:05:28.215168 | orchestrator | + local max_attempts=60
2025-09-19 11:05:28.215182 | orchestrator | + local name=osism-ansible
2025-09-19 11:05:28.215206 | orchestrator | + local attempt_num=1
2025-09-19 11:05:28.215622 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 11:05:28.255776 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:05:28.255868 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 11:05:28.255892 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 11:05:28.967320 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-19 11:05:29.175577 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-19 11:05:29.175651 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175659 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175681 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-19 11:05:29.175688 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-19 11:05:29.175700 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175705 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175710 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-09-19 11:05:29.175715 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175719 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-19 11:05:29.175724 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175729 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-19 11:05:29.175733 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175738 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-19 11:05:29.175743 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.175747 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-19 11:05:29.182464 | orchestrator | ++ semver latest 7.0.0
2025-09-19 11:05:29.228921 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 11:05:29.228993 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 11:05:29.229003 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-19 11:05:29.231938 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-19 11:05:41.404551 | orchestrator | 2025-09-19 11:05:41 | INFO  | Task 261d3e5e-0663-46cd-9af4-549f1d53a235 (resolvconf) was prepared for execution. 2025-09-19 11:05:41.404658 | orchestrator | 2025-09-19 11:05:41 | INFO  | It takes a moment until task 261d3e5e-0663-46cd-9af4-549f1d53a235 (resolvconf) has been started and output is visible here. 2025-09-19 11:05:55.202918 | orchestrator | 2025-09-19 11:05:55.203065 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-19 11:05:55.203083 | orchestrator | 2025-09-19 11:05:55.203095 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 11:05:55.203132 | orchestrator | Friday 19 September 2025 11:05:45 +0000 (0:00:00.149) 0:00:00.149 ****** 2025-09-19 11:05:55.203145 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:55.203157 | orchestrator | 2025-09-19 11:05:55.203168 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 11:05:55.203180 | orchestrator | Friday 19 September 2025 11:05:49 +0000 (0:00:03.950) 0:00:04.099 ****** 2025-09-19 11:05:55.203190 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:55.203202 | orchestrator | 2025-09-19 11:05:55.203212 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 11:05:55.203223 | orchestrator | Friday 19 September 2025 11:05:49 +0000 (0:00:00.056) 0:00:04.155 ****** 2025-09-19 11:05:55.203234 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-19 11:05:55.203245 | orchestrator | 2025-09-19 11:05:55.203256 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 11:05:55.203266 | orchestrator | Friday 19 September 2025 11:05:49 +0000 (0:00:00.088) 0:00:04.244 ****** 2025-09-19 11:05:55.203277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 11:05:55.203288 | orchestrator | 2025-09-19 11:05:55.203299 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 11:05:55.203309 | orchestrator | Friday 19 September 2025 11:05:49 +0000 (0:00:00.066) 0:00:04.310 ****** 2025-09-19 11:05:55.203320 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:55.203330 | orchestrator | 2025-09-19 11:05:55.203341 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 11:05:55.203351 | orchestrator | Friday 19 September 2025 11:05:50 +0000 (0:00:01.114) 0:00:05.425 ****** 2025-09-19 11:05:55.203362 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:55.203372 | orchestrator | 2025-09-19 11:05:55.203383 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 11:05:55.203393 | orchestrator | Friday 19 September 2025 11:05:50 +0000 (0:00:00.068) 0:00:05.494 ****** 2025-09-19 11:05:55.203404 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:55.203414 | orchestrator | 2025-09-19 11:05:55.203425 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 11:05:55.203438 | orchestrator | Friday 19 September 2025 11:05:51 +0000 (0:00:00.472) 0:00:05.967 ****** 2025-09-19 11:05:55.203450 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:55.203463 | orchestrator | 2025-09-19 11:05:55.203476 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-19 11:05:55.203489 | orchestrator | Friday 19 September 2025 11:05:51 +0000 (0:00:00.079) 0:00:06.046 ****** 2025-09-19 11:05:55.203501 | orchestrator | changed: [testbed-manager] 2025-09-19 11:05:55.203515 | orchestrator | 2025-09-19 11:05:55.203527 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 11:05:55.203540 | orchestrator | Friday 19 September 2025 11:05:51 +0000 (0:00:00.541) 0:00:06.588 ****** 2025-09-19 11:05:55.203552 | orchestrator | changed: [testbed-manager] 2025-09-19 11:05:55.203564 | orchestrator | 2025-09-19 11:05:55.203575 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 11:05:55.203585 | orchestrator | Friday 19 September 2025 11:05:52 +0000 (0:00:01.066) 0:00:07.654 ****** 2025-09-19 11:05:55.203596 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:55.203606 | orchestrator | 2025-09-19 11:05:55.203617 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 11:05:55.203627 | orchestrator | Friday 19 September 2025 11:05:53 +0000 (0:00:00.933) 0:00:08.587 ****** 2025-09-19 11:05:55.203648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-19 11:05:55.203667 | orchestrator | 2025-09-19 11:05:55.203678 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 11:05:55.203688 | orchestrator | Friday 19 September 2025 11:05:53 +0000 (0:00:00.092) 0:00:08.680 ****** 2025-09-19 11:05:55.203699 | orchestrator | changed: [testbed-manager] 2025-09-19 11:05:55.203709 | orchestrator | 2025-09-19 11:05:55.203720 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:05:55.203731 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 
failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:05:55.203742 | orchestrator | 2025-09-19 11:05:55.203753 | orchestrator | 2025-09-19 11:05:55.203764 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:05:55.203774 | orchestrator | Friday 19 September 2025 11:05:54 +0000 (0:00:01.127) 0:00:09.807 ****** 2025-09-19 11:05:55.203785 | orchestrator | =============================================================================== 2025-09-19 11:05:55.203795 | orchestrator | Gathering Facts --------------------------------------------------------- 3.95s 2025-09-19 11:05:55.203806 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-09-19 11:05:55.203816 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.11s 2025-09-19 11:05:55.203827 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2025-09-19 11:05:55.203837 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-09-19 11:05:55.203848 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2025-09-19 11:05:55.203874 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-09-19 11:05:55.203885 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-19 11:05:55.203896 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-09-19 11:05:55.203907 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-19 11:05:55.203917 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-09-19 11:05:55.203928 | orchestrator | osism.commons.resolvconf : Include distribution specific 
installation tasks --- 0.07s 2025-09-19 11:05:55.203938 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-19 11:05:55.510420 | orchestrator | + osism apply sshconfig 2025-09-19 11:06:07.515821 | orchestrator | 2025-09-19 11:06:07 | INFO  | Task 01f00f70-dec2-45f5-a9b6-db2f12da703a (sshconfig) was prepared for execution. 2025-09-19 11:06:07.515971 | orchestrator | 2025-09-19 11:06:07 | INFO  | It takes a moment until task 01f00f70-dec2-45f5-a9b6-db2f12da703a (sshconfig) has been started and output is visible here. 2025-09-19 11:06:19.337808 | orchestrator | 2025-09-19 11:06:19.337972 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-19 11:06:19.338115 | orchestrator | 2025-09-19 11:06:19.338129 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-19 11:06:19.338141 | orchestrator | Friday 19 September 2025 11:06:11 +0000 (0:00:00.168) 0:00:00.168 ****** 2025-09-19 11:06:19.338152 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:19.338163 | orchestrator | 2025-09-19 11:06:19.338174 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-19 11:06:19.338185 | orchestrator | Friday 19 September 2025 11:06:12 +0000 (0:00:00.628) 0:00:00.797 ****** 2025-09-19 11:06:19.338196 | orchestrator | changed: [testbed-manager] 2025-09-19 11:06:19.338208 | orchestrator | 2025-09-19 11:06:19.338219 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-19 11:06:19.338231 | orchestrator | Friday 19 September 2025 11:06:12 +0000 (0:00:00.544) 0:00:01.341 ****** 2025-09-19 11:06:19.338242 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-19 11:06:19.338253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-19 11:06:19.338291 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-1) 2025-09-19 11:06:19.338303 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-19 11:06:19.338313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-19 11:06:19.338340 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-19 11:06:19.338351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-19 11:06:19.338365 | orchestrator | 2025-09-19 11:06:19.338378 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-19 11:06:19.338391 | orchestrator | Friday 19 September 2025 11:06:18 +0000 (0:00:05.732) 0:00:07.074 ****** 2025-09-19 11:06:19.338403 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:06:19.338415 | orchestrator | 2025-09-19 11:06:19.338427 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-19 11:06:19.338439 | orchestrator | Friday 19 September 2025 11:06:18 +0000 (0:00:00.059) 0:00:07.134 ****** 2025-09-19 11:06:19.338452 | orchestrator | changed: [testbed-manager] 2025-09-19 11:06:19.338463 | orchestrator | 2025-09-19 11:06:19.338476 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:06:19.338489 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:06:19.338502 | orchestrator | 2025-09-19 11:06:19.338514 | orchestrator | 2025-09-19 11:06:19.338527 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:06:19.338539 | orchestrator | Friday 19 September 2025 11:06:19 +0000 (0:00:00.605) 0:00:07.740 ****** 2025-09-19 11:06:19.338550 | orchestrator | =============================================================================== 2025-09-19 11:06:19.338560 | orchestrator | osism.commons.sshconfig : Ensure config for each host 
exist ------------- 5.73s 2025-09-19 11:06:19.338571 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.63s 2025-09-19 11:06:19.338581 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-09-19 11:06:19.338592 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2025-09-19 11:06:19.338603 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-19 11:06:19.610636 | orchestrator | + osism apply known-hosts 2025-09-19 11:06:31.664608 | orchestrator | 2025-09-19 11:06:31 | INFO  | Task 3fbfb15e-44ee-4c0c-9f1b-6a31dc4a1ade (known-hosts) was prepared for execution. 2025-09-19 11:06:31.664741 | orchestrator | 2025-09-19 11:06:31 | INFO  | It takes a moment until task 3fbfb15e-44ee-4c0c-9f1b-6a31dc4a1ade (known-hosts) has been started and output is visible here. 2025-09-19 11:06:48.324272 | orchestrator | 2025-09-19 11:06:48.324386 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-19 11:06:48.324404 | orchestrator | 2025-09-19 11:06:48.324416 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-19 11:06:48.324428 | orchestrator | Friday 19 September 2025 11:06:35 +0000 (0:00:00.194) 0:00:00.194 ****** 2025-09-19 11:06:48.324439 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-19 11:06:48.324451 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-19 11:06:48.324462 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-19 11:06:48.324473 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-19 11:06:48.324483 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-19 11:06:48.324494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-19 11:06:48.324504 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-5) 2025-09-19 11:06:48.324515 | orchestrator | 2025-09-19 11:06:48.324526 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-19 11:06:48.324538 | orchestrator | Friday 19 September 2025 11:06:41 +0000 (0:00:05.906) 0:00:06.101 ****** 2025-09-19 11:06:48.324572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-19 11:06:48.324585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-19 11:06:48.324596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-19 11:06:48.324606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-19 11:06:48.324617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-19 11:06:48.324638 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-19 11:06:48.324649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-19 11:06:48.324660 | orchestrator | 2025-09-19 11:06:48.324671 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.324682 | orchestrator | Friday 19 September 2025 11:06:41 +0000 (0:00:00.163) 0:00:06.264 ****** 2025-09-19 11:06:48.324696 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbUoI60Kc0UDzX6achPD6/yq4jPtpGaINENt9ba87I3F7RM4gH5HGiYg8iX303I3/70nVhKR1Ypul5b+8A2CREGrER2079pCjJUAI+/HkgrbriMKKK/kuW8jA5gzp6+SLgWAxLzEt70cy+btqpHtCS4nczuarQ5kr15xNZgb+XvaB+GIIhYJ3AZheIpZoeVi2RsrZYV7ccRiojd82lrhwI09Hzk9hM8vLOcC0nPI0+/yp0R7kbSJgLivc80WiVwFOFsyrqPU6QphiFWeZjDGbo7GWcZ1bDsaIRp2xyToLoE44NqLu50bUTzfnWo/ejbJJWd78TG6tJGQeYI5MA9Xa425Lx/srmW4HfSjl5CCZgldiwRjThSJzAxfqnR9D2uK87mCrVP90o5SaHc24UIBx+mzgEtg2lD1W07o04TjaY/s4vCTcBbGiJSGzvqDD8UxByOeTSoDke72lRoDBd5vShei64YuK9YN0e86CiHgKlHXShx2Ha1sj57RDJcAM8ZGc=) 2025-09-19 11:06:48.324711 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBzxCd5lTK+R7UtVyYIuu4E0mlKa2qSyybGSICCcx7rlq7QM9FHtL5wj6NtFx6A23s/XlagV16bboKUGFfH43us=) 2025-09-19 11:06:48.324725 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEnJiWDderW2c/T4vOtQxIL5Kx7++iUYzpHFULrSlsW) 2025-09-19 11:06:48.324737 | orchestrator | 2025-09-19 11:06:48.324748 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.324758 | orchestrator | Friday 19 September 2025 11:06:42 +0000 (0:00:01.205) 0:00:07.470 ****** 2025-09-19 11:06:48.324769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKywT5+4enzMjP7Ekxy2Af762Fw4atLkTXJtlMLFqKIlx0u0M3gfoIeUrpCfUJngqU3+0bjjlGp/RkI7nznPdUs=) 2025-09-19 11:06:48.324780 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIAUhblGTtAS0t+1dSRLEkVxv7TRop0oFN8+Hf2y1jrf2) 2025-09-19 11:06:48.324817 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8KIOUiNofDkS1EoO15Zd4KtYS9kfTcgCDYlNVgEj6l73WVpmB2HjPt6CBDFYhGv05UXNAKQKwy6CHU9CQrwXrDULrUNefv+2L/mv8odQn/EzKek3x8HIM52GpPgwEJ80DON5lZTk8Uh7c8MDwVAzBuoPkGiI1GmscnvwGvre7Vb+L02vczWI00oSYO9kiqsZ6a4DItl8DzG99SS5ZzI0sOtiObJjcvI9HnJgBOtGp034UpOBvrKt6t8psuvDSUXSk+B/+OFq1Wyyds4jyxU0gLu1xASrgRu5sYEXhxa8rWOZ6Tm9T61MRFGmLbIxRJQsP/wBT7AaljLbloPZip3tbeWdGHD9bAc+/BID+RlmTMIMr5Pu7oD+x6Ux5hRW2sSSKEJRezyyI7boEziTwtfDUujrJMbUnWuE487roineQ1V288kTqiUPaivVV2Bky8oJRc79Pb5A3s0V75UI5TTKIfXq0NvmTPUcT3aibYg+MRvUxYwmpCMM8GOObUWnSlQE=) 2025-09-19 11:06:48.324879 | orchestrator | 2025-09-19 11:06:48.324893 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.324907 | orchestrator | Friday 19 September 2025 11:06:44 +0000 (0:00:01.081) 0:00:08.551 ****** 2025-09-19 11:06:48.324921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsMCbZnMxJmNYRfCXHfJ/JtQm8S4oZH++dU/f5RPcFuXLx01df7F2SdolpFVVZWxrxhKf0iZ0IlFNXcWPBUa2zSRIBhVRuxkeQ7YxHG9Eel7tbVAJpzgdN8X61zz1r7ZZiIsxvAILBbL9Qd6jn8xJQ6xXO6fIZWA9PlVCjFBgmf5HCXDOwXVUZA8A0p9Y8aAwTHhFxh2c0OdmP+isx+RVxBepLTB6SWs5mxeRXe4qPzY22Voh0cxoekG9XHzDE2FiZe5lwYTnO5Zvdex0KvFaHObwAckSEl3eOPot6Cag+aLp0ujcPMQJ1VDrCvTosUcHHzQ6LtpjNK2G7VMXIM+SJG0RCXqPgJ5/e9R5hYR/xbULKxwdGgRC70+YZ4a8k1l5PPoh20c/IJwJCtphxwb2eiV+btCt4UVxLHjK9PnMhr3BJvZpKK8JDhhBRbMm7OL5SFu66g+vlGLgqiAUCa+l8HiRvBLri93gOgqvzVJZU4a2/SjZ1oH/btYz0plB63S0=) 2025-09-19 11:06:48.324934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBClXXcRjsioyeCosat89+BlWV8uyGGyYfsimn+A52gF3dLzT2LZIjoWJt6himOrQsvenSJRjFS8bjBFMgNNxbdo=) 2025-09-19 11:06:48.324973 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgzJejKo5nymIllS3NQABe2Iza3IM88A90MPK1Ui6n0) 2025-09-19 11:06:48.324986 | orchestrator | 2025-09-19 11:06:48.324999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.325011 | orchestrator | Friday 19 September 2025 11:06:45 +0000 (0:00:01.073) 0:00:09.625 ****** 2025-09-19 11:06:48.325091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNcCCjxrBI7M+VWDNazK/8XtVqnYPOw/mdfUXGh2hoR+EpAYS8+r4RoEad6swXzQVrCcZpr6unjhRIrlnOj/oCB7uoITm/7xo7K/LdzJ6e8Eaj1JoUxRkM51gdfK5eWrJ3VHqtsamr1gDQxqOjYRbyqGA1cyn1cHdRhAVY4lkB7iAPL7Oy3qnuuNaoHsd0WkCapk1B+KO/0Rr8fsmJaCdmBglTZoN6cbkQvCSwcm+nyv/4iuv+/GCOKbmm/vT2yDA6OV7HBJBRvcwyIPP4V3VatAqlo24P6YrKQiPaRhPH7DPie6ZFpyu55F/LE6OjUfI+yooBKwPPKmoM02fEKCYGtszhCEP0rPvudcIRNrCDdu7U1jtxnuLPyH7+F8pyv5ct8l9TKTdMCf1jCK9jXYKDPsEjv0l/EJv305RfcLjXXxrHSt/0fxeuoGiu51Lf462Y/qiXtOCngunvon7xjkEnURULWStVgheTPRwnfcYo4dyBUGs9rPve5GOBB/NhUYc=) 2025-09-19 11:06:48.325106 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmFAWXuRgfO8zohMsGFE2R1AYcFVBeQ4iKYwQ93i/WO/FLjthXzme5ZEJmZUGprv7miNrxk4607Naon4mVZ7pY=) 2025-09-19 11:06:48.325118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID8yJxAipH3hukm5I8+7Z+TKiOB5dVUJg2MUStfLYZWy) 2025-09-19 11:06:48.325129 | orchestrator | 2025-09-19 11:06:48.325140 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.325150 | orchestrator | Friday 19 September 2025 11:06:46 +0000 (0:00:01.064) 0:00:10.689 ****** 2025-09-19 11:06:48.325161 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ7rykE2KUSx6mkHYyh5VMToaZhTQr4te+br0h56bZGI) 2025-09-19 11:06:48.325172 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyrMtPeNf5ib3Qt7l6AWid+GyY3j5qLULIDo5APigZ6mq99OxARYPr1TCMerqfxmghTGHed9XnqJ0955uUeoK5hvbB2B5KFYucpDzfeqKQVtqin+/qZTjBgvkGt3MwwNTR437aAFz5k+2udHSbzNASQNl/5zv6tGqd5ogfcKEw2PkFVSkMhcnUFhJ1CBYdAGXldgaeup28ZTb4og/Jnp+r6p79h8NqdC+po72amOcidvnriP1PLftVQx2/QXTdI6ELl4kc1mTQfgNF7Wv/c3SivVjF9K9Qfw06UJaOI4L+F46Xi0nePmwJKEwfEUQzirabsQjbK5cpUn7YOK2KKjL/ET6viigVOsmLXrFCwe3l7cd7Wa0tOrJclTEGYPzzNXJxBafpcwfNHlb1343k5+MBFDh3hDcmsCvkaaBFqOsqnkMw5VRISuTkCQLnbKmEYKT0kjUVMS1eNAMt2AoU31kHvroyzHzPH9nHlbnxVWLhEsqGrBt2cA2z1b/8o4EiOPU=) 2025-09-19 11:06:48.325183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPsv75i2LxuD2qAIzAr5mczUX+P951BqF6B7OkpD790e8jMjOiMe4eAvL7DAMDBB/bvKU7r3goMRFXXdKfpXrQo=) 2025-09-19 11:06:48.325203 | orchestrator | 2025-09-19 11:06:48.325214 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:48.325225 | orchestrator | Friday 19 September 2025 11:06:47 +0000 (0:00:01.072) 0:00:11.762 ****** 2025-09-19 11:06:48.325245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXGMm9WHvsoipgvN+B7lbe4Y+9AcL6g6gN8CeNtKWTK9LEc5q8zb07hsKkdWjHHOfhCyfdZlMW3PDpkAxkaGx90p5g9m8Yout4ucYqjzXPrnbPDC9YoVurRoY+ugiGb0kfUcCq4LvO5ObHMDa79kw5sESQ20H4Le3oORsQi+HVhw1sHWaLemw1UCEXCAn7q/CXvjLuRRyGS0D5Lc+DyXMqrXHGRm/aUlKx1/To6Fcippzo5XJAavjUp/E/1oObxuXyXcump3kBobHGI0HaNOHNoOY+axBYVv16WTlJYCds8PG3ee/6J7eZ/JqxOPdUnM1ZggVKitHo2Y/ci4/BuCamnDS9+71eN831yr3Q1WieeWJFNhujnt1wlJV27dLZUSbhP9JGjZwnvom7IATeTAFm/W1oRkYH+yyPR6sk2tPTG7Ag9xkuKumVcDOh2/1shsc7pLhQE5VkrpQeXz0f5fAUHiGD+2FXmu93b+bJUNTwJRt5mV12sFQLE/nTHK0NZuE=) 2025-09-19 11:06:59.110903 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGlf/D7NECRXcmhGrdwhL8DHpJ6L068hyRf1+qunufnGyMnPPaoA4DuVYImviAKFgwuStcfPliBDwco430T8bQ=) 2025-09-19 11:06:59.111140 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtp664ZX5Z5SpgY7LAp4PHmxP5vjdirV0pAaAcHIKXV) 2025-09-19 11:06:59.111165 | orchestrator | 2025-09-19 11:06:59.111184 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:59.111248 | orchestrator | Friday 19 September 2025 11:06:48 +0000 (0:00:01.044) 0:00:12.807 ****** 2025-09-19 11:06:59.111267 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEoWd9jfVkIO08GtbYx0ltK93042SGkRiavEkeHBbC4/) 2025-09-19 11:06:59.111286 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzoWQCzvlOyFInMIXHWCElZOysR10VONCo9M1eDB3gLEv9ntRJ9c5SJNtRtk+zoJivaGSrx2jpPFNX3ZFvYPuA1ZkmT35DSDq/rijnYHRxMVSZDBm8I2fGmb9Spp+waM/O6Dcw1qgLXbO4Zj3EtZ2korFrXwbuI6ZdK1BUJO3ytbiQ7qPaFkFVK2y8tqpsoUh0ct5t5h7Uo+AjpCFJxn3K6FaEjf0+sd1BO4+EufpA93H41xUtmy2VviVfjtiUzIFshN2ajD0q6QPsOa1hmbGIBSqV3XtQXiXi7n9wfucXW9z6s/ErntQ0aglBXeMCNFL+p4fYpTZkiEP2Mj6+SLDOe/hialvli6ofsbQX9iJyzEMG65Hevju5lCqb1ePGuNPl8VUObjoi39ykKo4URsDt+VYZTgoHSxV1bwtovJxS/ZU2c+h8V83gWlHwdvg8eOTDxXl9wUP/hCpt7IDoOLLhp8MSyuqgjqY8hQcEXUE/ff9zgyuw4+iIrJqg+4UXOWs=) 2025-09-19 11:06:59.111306 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfknqcneU6O8mgyR91EbcI4ORfi1he9aseS6RYTLv4waMrqsFJBaynTMWzg2vOQIUMh3gC1U5GcEkxI+V7Rak0=) 2025-09-19 11:06:59.111323 | orchestrator | 2025-09-19 11:06:59.111340 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-19 11:06:59.111357 | orchestrator | Friday 19 September 2025 11:06:49 +0000 (0:00:01.075) 0:00:13.882 ****** 2025-09-19 
11:06:59.111374 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-19 11:06:59.111391 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-19 11:06:59.111407 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-19 11:06:59.111424 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-19 11:06:59.111442 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-19 11:06:59.111460 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-19 11:06:59.111477 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-19 11:06:59.111495 | orchestrator | 2025-09-19 11:06:59.111513 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-19 11:06:59.111551 | orchestrator | Friday 19 September 2025 11:06:54 +0000 (0:00:05.248) 0:00:19.130 ****** 2025-09-19 11:06:59.111570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-19 11:06:59.111590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-19 11:06:59.111635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-19 11:06:59.111654 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-19 11:06:59.111671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml 
for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-19 11:06:59.111689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-19 11:06:59.111707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-19 11:06:59.111724 | orchestrator | 2025-09-19 11:06:59.111741 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:59.111757 | orchestrator | Friday 19 September 2025 11:06:54 +0000 (0:00:00.166) 0:00:19.297 ****** 2025-09-19 11:06:59.111773 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEnJiWDderW2c/T4vOtQxIL5Kx7++iUYzpHFULrSlsW) 2025-09-19 11:06:59.111818 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbUoI60Kc0UDzX6achPD6/yq4jPtpGaINENt9ba87I3F7RM4gH5HGiYg8iX303I3/70nVhKR1Ypul5b+8A2CREGrER2079pCjJUAI+/HkgrbriMKKK/kuW8jA5gzp6+SLgWAxLzEt70cy+btqpHtCS4nczuarQ5kr15xNZgb+XvaB+GIIhYJ3AZheIpZoeVi2RsrZYV7ccRiojd82lrhwI09Hzk9hM8vLOcC0nPI0+/yp0R7kbSJgLivc80WiVwFOFsyrqPU6QphiFWeZjDGbo7GWcZ1bDsaIRp2xyToLoE44NqLu50bUTzfnWo/ejbJJWd78TG6tJGQeYI5MA9Xa425Lx/srmW4HfSjl5CCZgldiwRjThSJzAxfqnR9D2uK87mCrVP90o5SaHc24UIBx+mzgEtg2lD1W07o04TjaY/s4vCTcBbGiJSGzvqDD8UxByOeTSoDke72lRoDBd5vShei64YuK9YN0e86CiHgKlHXShx2Ha1sj57RDJcAM8ZGc=) 2025-09-19 11:06:59.111836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBzxCd5lTK+R7UtVyYIuu4E0mlKa2qSyybGSICCcx7rlq7QM9FHtL5wj6NtFx6A23s/XlagV16bboKUGFfH43us=) 2025-09-19 11:06:59.111852 | orchestrator | 2025-09-19 11:06:59.111868 | orchestrator | TASK [osism.commons.known_hosts : 
Write scanned known_hosts entries] *********** 2025-09-19 11:06:59.111884 | orchestrator | Friday 19 September 2025 11:06:55 +0000 (0:00:01.076) 0:00:20.374 ****** 2025-09-19 11:06:59.111901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8KIOUiNofDkS1EoO15Zd4KtYS9kfTcgCDYlNVgEj6l73WVpmB2HjPt6CBDFYhGv05UXNAKQKwy6CHU9CQrwXrDULrUNefv+2L/mv8odQn/EzKek3x8HIM52GpPgwEJ80DON5lZTk8Uh7c8MDwVAzBuoPkGiI1GmscnvwGvre7Vb+L02vczWI00oSYO9kiqsZ6a4DItl8DzG99SS5ZzI0sOtiObJjcvI9HnJgBOtGp034UpOBvrKt6t8psuvDSUXSk+B/+OFq1Wyyds4jyxU0gLu1xASrgRu5sYEXhxa8rWOZ6Tm9T61MRFGmLbIxRJQsP/wBT7AaljLbloPZip3tbeWdGHD9bAc+/BID+RlmTMIMr5Pu7oD+x6Ux5hRW2sSSKEJRezyyI7boEziTwtfDUujrJMbUnWuE487roineQ1V288kTqiUPaivVV2Bky8oJRc79Pb5A3s0V75UI5TTKIfXq0NvmTPUcT3aibYg+MRvUxYwmpCMM8GOObUWnSlQE=) 2025-09-19 11:06:59.111918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKywT5+4enzMjP7Ekxy2Af762Fw4atLkTXJtlMLFqKIlx0u0M3gfoIeUrpCfUJngqU3+0bjjlGp/RkI7nznPdUs=) 2025-09-19 11:06:59.111960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAUhblGTtAS0t+1dSRLEkVxv7TRop0oFN8+Hf2y1jrf2) 2025-09-19 11:06:59.111977 | orchestrator | 2025-09-19 11:06:59.111994 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:59.112011 | orchestrator | Friday 19 September 2025 11:06:56 +0000 (0:00:01.069) 0:00:21.444 ****** 2025-09-19 11:06:59.112041 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBClXXcRjsioyeCosat89+BlWV8uyGGyYfsimn+A52gF3dLzT2LZIjoWJt6himOrQsvenSJRjFS8bjBFMgNNxbdo=) 2025-09-19 11:06:59.112058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDsMCbZnMxJmNYRfCXHfJ/JtQm8S4oZH++dU/f5RPcFuXLx01df7F2SdolpFVVZWxrxhKf0iZ0IlFNXcWPBUa2zSRIBhVRuxkeQ7YxHG9Eel7tbVAJpzgdN8X61zz1r7ZZiIsxvAILBbL9Qd6jn8xJQ6xXO6fIZWA9PlVCjFBgmf5HCXDOwXVUZA8A0p9Y8aAwTHhFxh2c0OdmP+isx+RVxBepLTB6SWs5mxeRXe4qPzY22Voh0cxoekG9XHzDE2FiZe5lwYTnO5Zvdex0KvFaHObwAckSEl3eOPot6Cag+aLp0ujcPMQJ1VDrCvTosUcHHzQ6LtpjNK2G7VMXIM+SJG0RCXqPgJ5/e9R5hYR/xbULKxwdGgRC70+YZ4a8k1l5PPoh20c/IJwJCtphxwb2eiV+btCt4UVxLHjK9PnMhr3BJvZpKK8JDhhBRbMm7OL5SFu66g+vlGLgqiAUCa+l8HiRvBLri93gOgqvzVJZU4a2/SjZ1oH/btYz0plB63S0=) 2025-09-19 11:06:59.112072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgzJejKo5nymIllS3NQABe2Iza3IM88A90MPK1Ui6n0) 2025-09-19 11:06:59.112085 | orchestrator | 2025-09-19 11:06:59.112098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:06:59.112111 | orchestrator | Friday 19 September 2025 11:06:58 +0000 (0:00:01.049) 0:00:22.494 ****** 2025-09-19 11:06:59.112125 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID8yJxAipH3hukm5I8+7Z+TKiOB5dVUJg2MUStfLYZWy) 2025-09-19 11:06:59.112149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNcCCjxrBI7M+VWDNazK/8XtVqnYPOw/mdfUXGh2hoR+EpAYS8+r4RoEad6swXzQVrCcZpr6unjhRIrlnOj/oCB7uoITm/7xo7K/LdzJ6e8Eaj1JoUxRkM51gdfK5eWrJ3VHqtsamr1gDQxqOjYRbyqGA1cyn1cHdRhAVY4lkB7iAPL7Oy3qnuuNaoHsd0WkCapk1B+KO/0Rr8fsmJaCdmBglTZoN6cbkQvCSwcm+nyv/4iuv+/GCOKbmm/vT2yDA6OV7HBJBRvcwyIPP4V3VatAqlo24P6YrKQiPaRhPH7DPie6ZFpyu55F/LE6OjUfI+yooBKwPPKmoM02fEKCYGtszhCEP0rPvudcIRNrCDdu7U1jtxnuLPyH7+F8pyv5ct8l9TKTdMCf1jCK9jXYKDPsEjv0l/EJv305RfcLjXXxrHSt/0fxeuoGiu51Lf462Y/qiXtOCngunvon7xjkEnURULWStVgheTPRwnfcYo4dyBUGs9rPve5GOBB/NhUYc=) 2025-09-19 11:06:59.112173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNmFAWXuRgfO8zohMsGFE2R1AYcFVBeQ4iKYwQ93i/WO/FLjthXzme5ZEJmZUGprv7miNrxk4607Naon4mVZ7pY=) 2025-09-19 11:07:03.307586 | orchestrator | 2025-09-19 11:07:03.307688 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:07:03.307705 | orchestrator | Friday 19 September 2025 11:06:59 +0000 (0:00:01.100) 0:00:23.594 ****** 2025-09-19 11:07:03.307720 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyrMtPeNf5ib3Qt7l6AWid+GyY3j5qLULIDo5APigZ6mq99OxARYPr1TCMerqfxmghTGHed9XnqJ0955uUeoK5hvbB2B5KFYucpDzfeqKQVtqin+/qZTjBgvkGt3MwwNTR437aAFz5k+2udHSbzNASQNl/5zv6tGqd5ogfcKEw2PkFVSkMhcnUFhJ1CBYdAGXldgaeup28ZTb4og/Jnp+r6p79h8NqdC+po72amOcidvnriP1PLftVQx2/QXTdI6ELl4kc1mTQfgNF7Wv/c3SivVjF9K9Qfw06UJaOI4L+F46Xi0nePmwJKEwfEUQzirabsQjbK5cpUn7YOK2KKjL/ET6viigVOsmLXrFCwe3l7cd7Wa0tOrJclTEGYPzzNXJxBafpcwfNHlb1343k5+MBFDh3hDcmsCvkaaBFqOsqnkMw5VRISuTkCQLnbKmEYKT0kjUVMS1eNAMt2AoU31kHvroyzHzPH9nHlbnxVWLhEsqGrBt2cA2z1b/8o4EiOPU=) 2025-09-19 11:07:03.307735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPsv75i2LxuD2qAIzAr5mczUX+P951BqF6B7OkpD790e8jMjOiMe4eAvL7DAMDBB/bvKU7r3goMRFXXdKfpXrQo=) 2025-09-19 11:07:03.307748 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ7rykE2KUSx6mkHYyh5VMToaZhTQr4te+br0h56bZGI) 2025-09-19 11:07:03.307760 | orchestrator | 2025-09-19 11:07:03.307771 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:07:03.307782 | orchestrator | Friday 19 September 2025 11:07:00 +0000 (0:00:01.045) 0:00:24.640 ****** 2025-09-19 11:07:03.307793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGlf/D7NECRXcmhGrdwhL8DHpJ6L068hyRf1+qunufnGyMnPPaoA4DuVYImviAKFgwuStcfPliBDwco430T8bQ=) 2025-09-19 11:07:03.307830 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXGMm9WHvsoipgvN+B7lbe4Y+9AcL6g6gN8CeNtKWTK9LEc5q8zb07hsKkdWjHHOfhCyfdZlMW3PDpkAxkaGx90p5g9m8Yout4ucYqjzXPrnbPDC9YoVurRoY+ugiGb0kfUcCq4LvO5ObHMDa79kw5sESQ20H4Le3oORsQi+HVhw1sHWaLemw1UCEXCAn7q/CXvjLuRRyGS0D5Lc+DyXMqrXHGRm/aUlKx1/To6Fcippzo5XJAavjUp/E/1oObxuXyXcump3kBobHGI0HaNOHNoOY+axBYVv16WTlJYCds8PG3ee/6J7eZ/JqxOPdUnM1ZggVKitHo2Y/ci4/BuCamnDS9+71eN831yr3Q1WieeWJFNhujnt1wlJV27dLZUSbhP9JGjZwnvom7IATeTAFm/W1oRkYH+yyPR6sk2tPTG7Ag9xkuKumVcDOh2/1shsc7pLhQE5VkrpQeXz0f5fAUHiGD+2FXmu93b+bJUNTwJRt5mV12sFQLE/nTHK0NZuE=) 2025-09-19 11:07:03.307842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtp664ZX5Z5SpgY7LAp4PHmxP5vjdirV0pAaAcHIKXV) 2025-09-19 11:07:03.307853 | orchestrator | 2025-09-19 11:07:03.307864 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:07:03.307875 | orchestrator | Friday 19 September 2025 11:07:01 +0000 (0:00:01.100) 0:00:25.740 ****** 2025-09-19 11:07:03.307927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzoWQCzvlOyFInMIXHWCElZOysR10VONCo9M1eDB3gLEv9ntRJ9c5SJNtRtk+zoJivaGSrx2jpPFNX3ZFvYPuA1ZkmT35DSDq/rijnYHRxMVSZDBm8I2fGmb9Spp+waM/O6Dcw1qgLXbO4Zj3EtZ2korFrXwbuI6ZdK1BUJO3ytbiQ7qPaFkFVK2y8tqpsoUh0ct5t5h7Uo+AjpCFJxn3K6FaEjf0+sd1BO4+EufpA93H41xUtmy2VviVfjtiUzIFshN2ajD0q6QPsOa1hmbGIBSqV3XtQXiXi7n9wfucXW9z6s/ErntQ0aglBXeMCNFL+p4fYpTZkiEP2Mj6+SLDOe/hialvli6ofsbQX9iJyzEMG65Hevju5lCqb1ePGuNPl8VUObjoi39ykKo4URsDt+VYZTgoHSxV1bwtovJxS/ZU2c+h8V83gWlHwdvg8eOTDxXl9wUP/hCpt7IDoOLLhp8MSyuqgjqY8hQcEXUE/ff9zgyuw4+iIrJqg+4UXOWs=) 2025-09-19 11:07:03.307999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfknqcneU6O8mgyR91EbcI4ORfi1he9aseS6RYTLv4waMrqsFJBaynTMWzg2vOQIUMh3gC1U5GcEkxI+V7Rak0=) 2025-09-19 11:07:03.308011 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEoWd9jfVkIO08GtbYx0ltK93042SGkRiavEkeHBbC4/) 2025-09-19 11:07:03.308022 | orchestrator | 2025-09-19 11:07:03.308033 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-19 11:07:03.308043 | orchestrator | Friday 19 September 2025 11:07:02 +0000 (0:00:01.027) 0:00:26.768 ****** 2025-09-19 11:07:03.308060 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 11:07:03.308084 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 11:07:03.308110 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 11:07:03.308127 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 11:07:03.308144 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 11:07:03.308161 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 11:07:03.308179 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 11:07:03.308195 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:07:03.308211 | orchestrator | 2025-09-19 11:07:03.308253 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-19 11:07:03.308271 | orchestrator | Friday 19 September 2025 11:07:02 +0000 (0:00:00.162) 0:00:26.930 ****** 2025-09-19 11:07:03.308288 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:07:03.308305 | orchestrator | 2025-09-19 11:07:03.308324 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-19 11:07:03.308341 | orchestrator | Friday 19 September 2025 11:07:02 +0000 
(0:00:00.059) 0:00:26.989 ****** 2025-09-19 11:07:03.308360 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:07:03.308377 | orchestrator | 2025-09-19 11:07:03.308396 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-19 11:07:03.308416 | orchestrator | Friday 19 September 2025 11:07:02 +0000 (0:00:00.043) 0:00:27.033 ****** 2025-09-19 11:07:03.308491 | orchestrator | changed: [testbed-manager] 2025-09-19 11:07:03.308517 | orchestrator | 2025-09-19 11:07:03.308537 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:07:03.308558 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:07:03.308580 | orchestrator | 2025-09-19 11:07:03.308591 | orchestrator | 2025-09-19 11:07:03.308602 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:07:03.308612 | orchestrator | Friday 19 September 2025 11:07:03 +0000 (0:00:00.518) 0:00:27.551 ****** 2025-09-19 11:07:03.308623 | orchestrator | =============================================================================== 2025-09-19 11:07:03.308679 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.91s 2025-09-19 11:07:03.308693 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.25s 2025-09-19 11:07:03.308705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-09-19 11:07:03.308715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-19 11:07:03.308761 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-19 11:07:03.308774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 11:07:03.308785 
| orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 11:07:03.308796 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-19 11:07:03.308806 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 11:07:03.308817 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 11:07:03.308827 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-19 11:07:03.308838 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-19 11:07:03.308848 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 11:07:03.308859 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-19 11:07:03.308869 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-19 11:07:03.308882 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-19 11:07:03.308901 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-09-19 11:07:03.308968 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-19 11:07:03.308988 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-19 11:07:03.309006 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-09-19 11:07:03.592914 | orchestrator | + osism apply squid 2025-09-19 11:07:15.596732 | orchestrator | 2025-09-19 11:07:15 | INFO  | Task 3714dc96-eade-4ff8-b19a-e14cb78b7d95 (squid) was prepared for execution. 
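The known_hosts play above scans each node's host keys and appends them to the manager's known_hosts file, then fixes permissions. A minimal sketch of that write step, under the assumption that a temp file stands in for `~/.ssh/known_hosts`, canned (truncated) entries stand in for real `ssh-keyscan` output, and the 0644 mode is a guess rather than taken from the role:

```shell
# Sketch of the known_hosts write step: append scanned entries, then
# set permissions. Entries are canned so this runs without network;
# the key material below is truncated and purely illustrative.
known_hosts=$(mktemp)
printf '%s\n' \
  '192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPEnJiWDderW2c...' \
  '192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAUhblGTtAS0...' \
  >> "$known_hosts"
# The real role gathers such lines per host, roughly:
#   ssh-keyscan -t rsa,ecdsa,ed25519 "$ansible_host"
entry_count=$(grep -c . "$known_hosts")
chmod 0644 "$known_hosts"  # mirrors the final "Set file permissions" task (mode assumed)
rm -f "$known_hosts"
```

Scanning once at deploy time and writing the results, rather than trusting keys on first connect, is what lets the later plays run with strict host key checking against all six nodes.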
2025-09-19 11:07:15.596845 | orchestrator | 2025-09-19 11:07:15 | INFO  | It takes a moment until task 3714dc96-eade-4ff8-b19a-e14cb78b7d95 (squid) has been started and output is visible here. 2025-09-19 11:09:09.464691 | orchestrator | 2025-09-19 11:09:09.464983 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-19 11:09:09.465019 | orchestrator | 2025-09-19 11:09:09.465042 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-19 11:09:09.465056 | orchestrator | Friday 19 September 2025 11:07:19 +0000 (0:00:00.165) 0:00:00.165 ****** 2025-09-19 11:09:09.465087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 11:09:09.465100 | orchestrator | 2025-09-19 11:09:09.465111 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-19 11:09:09.465149 | orchestrator | Friday 19 September 2025 11:07:19 +0000 (0:00:00.090) 0:00:00.256 ****** 2025-09-19 11:09:09.465160 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:09.465172 | orchestrator | 2025-09-19 11:09:09.465183 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-19 11:09:09.465194 | orchestrator | Friday 19 September 2025 11:07:21 +0000 (0:00:01.436) 0:00:01.692 ****** 2025-09-19 11:09:09.465205 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-19 11:09:09.465216 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-19 11:09:09.465227 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-19 11:09:09.465237 | orchestrator | 2025-09-19 11:09:09.465248 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-19 11:09:09.465258 | orchestrator | Friday 19 
September 2025 11:07:22 +0000 (0:00:01.163) 0:00:02.856 ****** 2025-09-19 11:09:09.465269 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-19 11:09:09.465280 | orchestrator | 2025-09-19 11:09:09.465291 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-19 11:09:09.465301 | orchestrator | Friday 19 September 2025 11:07:23 +0000 (0:00:01.004) 0:00:03.860 ****** 2025-09-19 11:09:09.465312 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:09.465322 | orchestrator | 2025-09-19 11:09:09.465333 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-19 11:09:09.465344 | orchestrator | Friday 19 September 2025 11:07:23 +0000 (0:00:00.351) 0:00:04.212 ****** 2025-09-19 11:09:09.465354 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:09.465365 | orchestrator | 2025-09-19 11:09:09.465375 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-19 11:09:09.465386 | orchestrator | Friday 19 September 2025 11:07:24 +0000 (0:00:00.903) 0:00:05.116 ****** 2025-09-19 11:09:09.465396 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-19 11:09:09.465407 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:09.465418 | orchestrator | 2025-09-19 11:09:09.465428 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-19 11:09:09.465439 | orchestrator | Friday 19 September 2025 11:07:56 +0000 (0:00:31.782) 0:00:36.898 ****** 2025-09-19 11:09:09.465449 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:09.465460 | orchestrator | 2025-09-19 11:09:09.465471 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-19 11:09:09.465481 | orchestrator | Friday 19 September 2025 11:08:08 +0000 (0:00:12.140) 0:00:49.038 ****** 2025-09-19 11:09:09.465493 | orchestrator | Pausing for 60 seconds 2025-09-19 11:09:09.465503 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:09.465516 | orchestrator | 2025-09-19 11:09:09.465529 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-19 11:09:09.465541 | orchestrator | Friday 19 September 2025 11:09:08 +0000 (0:01:00.069) 0:01:49.108 ****** 2025-09-19 11:09:09.465554 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:09.465565 | orchestrator | 2025-09-19 11:09:09.465577 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-19 11:09:09.465590 | orchestrator | Friday 19 September 2025 11:09:08 +0000 (0:00:00.067) 0:01:49.175 ****** 2025-09-19 11:09:09.465602 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:09.465614 | orchestrator | 2025-09-19 11:09:09.465626 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:09:09.465639 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:09:09.465651 | orchestrator | 2025-09-19 11:09:09.465664 | orchestrator | 2025-09-19 11:09:09.465675 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 11:09:09.465688 | orchestrator | Friday 19 September 2025 11:09:09 +0000 (0:00:00.667) 0:01:49.843 ****** 2025-09-19 11:09:09.465712 | orchestrator | =============================================================================== 2025-09-19 11:09:09.465725 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-19 11:09:09.465760 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.78s 2025-09-19 11:09:09.465773 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.14s 2025-09-19 11:09:09.465785 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.44s 2025-09-19 11:09:09.465797 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.16s 2025-09-19 11:09:09.465809 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s 2025-09-19 11:09:09.465822 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-09-19 11:09:09.465834 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2025-09-19 11:09:09.465846 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-09-19 11:09:09.465858 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-19 11:09:09.465870 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-19 11:09:09.755065 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 11:09:09.756069 | orchestrator | ++ semver latest 9.0.0 2025-09-19 11:09:09.817102 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-19 11:09:09.817215 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 11:09:09.818624 | 
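Nearly all of the squid play's runtime sits in the restart and wait handlers (60s pause plus a health poll). A hedged sketch of what a "wait for an healthy service" handler typically polls for a docker-compose-managed container; the container name, interval, and retry count here are assumptions, not values read from the role:

```shell
# Poll a container's health status until it reports "healthy" or the
# retry budget is exhausted. Name, interval, and retries are assumed.
wait_healthy() {
  name=$1 tries=${2:-30} interval=${3:-2}
  i=0
  while [ "$i" -lt "$tries" ]; do
    status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
    [ "$status" = "healthy" ] && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# e.g. wait_healthy squid 30 2
```

Gating on the container's own healthcheck, rather than a fixed sleep alone, is what turns "Restart squid service" into a safe ordering point for the tasks that depend on the proxy.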
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-19 11:09:21.804763 | orchestrator | 2025-09-19 11:09:21 | INFO  | Task e059ff77-c46f-407f-826d-008f0294a9e7 (operator) was prepared for execution. 2025-09-19 11:09:21.804836 | orchestrator | 2025-09-19 11:09:21 | INFO  | It takes a moment until task e059ff77-c46f-407f-826d-008f0294a9e7 (operator) has been started and output is visible here. 2025-09-19 11:09:37.500404 | orchestrator | 2025-09-19 11:09:37.500538 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-19 11:09:37.500552 | orchestrator | 2025-09-19 11:09:37.500563 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 11:09:37.500573 | orchestrator | Friday 19 September 2025 11:09:25 +0000 (0:00:00.148) 0:00:00.148 ****** 2025-09-19 11:09:37.500583 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:37.500595 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:37.500604 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:37.500614 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:37.500624 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:37.500657 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:37.500667 | orchestrator | 2025-09-19 11:09:37.500677 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-19 11:09:37.500687 | orchestrator | Friday 19 September 2025 11:09:29 +0000 (0:00:03.438) 0:00:03.586 ****** 2025-09-19 11:09:37.500731 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:37.500741 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:37.500751 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:37.500760 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:37.500770 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:37.500779 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:37.500788 | orchestrator | 2025-09-19 
11:09:37.500798 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-19 11:09:37.500808 | orchestrator | 2025-09-19 11:09:37.500818 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 11:09:37.500828 | orchestrator | Friday 19 September 2025 11:09:29 +0000 (0:00:00.711) 0:00:04.298 ****** 2025-09-19 11:09:37.500838 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:37.500847 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:37.500857 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:37.500866 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:37.500876 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:37.500885 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:37.500921 | orchestrator | 2025-09-19 11:09:37.500933 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 11:09:37.500945 | orchestrator | Friday 19 September 2025 11:09:30 +0000 (0:00:00.175) 0:00:04.473 ****** 2025-09-19 11:09:37.500955 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:37.500966 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:37.500977 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:37.500988 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:37.500999 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:37.501009 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:37.501020 | orchestrator | 2025-09-19 11:09:37.501032 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 11:09:37.501043 | orchestrator | Friday 19 September 2025 11:09:30 +0000 (0:00:00.182) 0:00:04.656 ****** 2025-09-19 11:09:37.501054 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:37.501066 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:37.501077 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:37.501088 | 
orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:37.501099 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:37.501111 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:37.501123 | orchestrator | 2025-09-19 11:09:37.501133 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 11:09:37.501145 | orchestrator | Friday 19 September 2025 11:09:30 +0000 (0:00:00.558) 0:00:05.214 ****** 2025-09-19 11:09:37.501155 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:37.501166 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:37.501177 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:37.501188 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:37.501199 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:37.501210 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:37.501221 | orchestrator | 2025-09-19 11:09:37.501232 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 11:09:37.501243 | orchestrator | Friday 19 September 2025 11:09:31 +0000 (0:00:00.886) 0:00:06.101 ****** 2025-09-19 11:09:37.501254 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-19 11:09:37.501266 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-19 11:09:37.501277 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-19 11:09:37.501287 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-19 11:09:37.501296 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-19 11:09:37.501305 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-19 11:09:37.501315 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-19 11:09:37.501324 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-19 11:09:37.501333 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-19 11:09:37.501343 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-09-19 11:09:37.501352 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-19 11:09:37.501362 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-19 11:09:37.501371 | orchestrator | 2025-09-19 11:09:37.501381 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-19 11:09:37.501390 | orchestrator | Friday 19 September 2025 11:09:32 +0000 (0:00:01.095) 0:00:07.196 ****** 2025-09-19 11:09:37.501400 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:37.501409 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:37.501418 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:37.501428 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:37.501437 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:37.501446 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:37.501456 | orchestrator | 2025-09-19 11:09:37.501466 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 11:09:37.501476 | orchestrator | Friday 19 September 2025 11:09:33 +0000 (0:00:01.220) 0:00:08.416 ****** 2025-09-19 11:09:37.501486 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-19 11:09:37.501504 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-19 11:09:37.501514 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-19 11:09:37.501524 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501553 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501563 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501572 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501582 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501591 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:09:37.501600 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501610 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501619 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501628 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501637 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501646 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-19 11:09:37.501656 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501665 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501674 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501684 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501712 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501722 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:09:37.501731 | 
orchestrator | 2025-09-19 11:09:37.501741 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 11:09:37.501751 | orchestrator | Friday 19 September 2025 11:09:35 +0000 (0:00:01.354) 0:00:09.771 ****** 2025-09-19 11:09:37.501760 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:37.501770 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:37.501779 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:37.501788 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:09:37.501798 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:09:37.501807 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:37.501816 | orchestrator | 2025-09-19 11:09:37.501825 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 11:09:37.501834 | orchestrator | Friday 19 September 2025 11:09:35 +0000 (0:00:00.198) 0:00:09.969 ****** 2025-09-19 11:09:37.501844 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:37.501853 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:37.501862 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:37.501872 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:37.501881 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:37.501890 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:37.501900 | orchestrator | 2025-09-19 11:09:37.501909 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 11:09:37.501918 | orchestrator | Friday 19 September 2025 11:09:36 +0000 (0:00:00.568) 0:00:10.537 ****** 2025-09-19 11:09:37.501928 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:37.501937 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:37.501946 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:37.501956 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:09:37.501965 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:09:37.501974 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:37.501984 | orchestrator | 2025-09-19 11:09:37.502004 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 11:09:37.502014 | orchestrator | Friday 19 September 2025 11:09:36 +0000 (0:00:00.277) 0:00:10.815 ****** 2025-09-19 11:09:37.502079 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:09:37.502094 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:09:37.502103 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:37.502113 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:37.502122 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 11:09:37.502132 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:37.502141 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:09:37.502150 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:37.502160 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:09:37.502169 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:37.502179 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-19 11:09:37.502188 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:37.502197 | orchestrator | 2025-09-19 11:09:37.502207 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 11:09:37.502216 | orchestrator | Friday 19 September 2025 11:09:37 +0000 (0:00:00.662) 0:00:11.478 ****** 2025-09-19 11:09:37.502225 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:37.502235 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:37.502244 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:37.502254 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:09:37.502263 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
11:09:37.502272 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:37.502282 | orchestrator | 2025-09-19 11:09:37.502291 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 11:09:37.502300 | orchestrator | Friday 19 September 2025 11:09:37 +0000 (0:00:00.149) 0:00:11.628 ****** 2025-09-19 11:09:37.502310 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:37.502319 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:37.502328 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:37.502338 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:09:37.502354 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:09:37.502364 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:37.502373 | orchestrator | 2025-09-19 11:09:37.502383 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 11:09:37.502392 | orchestrator | Friday 19 September 2025 11:09:37 +0000 (0:00:00.163) 0:00:11.792 ****** 2025-09-19 11:09:37.502406 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:37.502416 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:37.502425 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:37.502434 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:09:37.502451 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:09:38.573904 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:38.574013 | orchestrator | 2025-09-19 11:09:38.574086 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 11:09:38.574105 | orchestrator | Friday 19 September 2025 11:09:37 +0000 (0:00:00.151) 0:00:11.943 ****** 2025-09-19 11:09:38.574126 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:38.574146 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:38.574165 | orchestrator | changed: [testbed-node-1] 2025-09-19 
11:09:38.574184 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:38.574202 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:38.574220 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:38.574239 | orchestrator | 2025-09-19 11:09:38.574259 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 11:09:38.574280 | orchestrator | Friday 19 September 2025 11:09:38 +0000 (0:00:00.645) 0:00:12.589 ****** 2025-09-19 11:09:38.574301 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:09:38.574320 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:09:38.574340 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:09:38.574407 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:09:38.574429 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:09:38.574450 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:09:38.574470 | orchestrator | 2025-09-19 11:09:38.574491 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:09:38.574513 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574536 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574556 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574576 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574587 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574598 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:09:38.574616 | orchestrator | 2025-09-19 11:09:38.574635 | orchestrator | 2025-09-19 11:09:38.574654 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:09:38.574671 | orchestrator | Friday 19 September 2025 11:09:38 +0000 (0:00:00.219) 0:00:12.808 ****** 2025-09-19 11:09:38.574719 | orchestrator | =============================================================================== 2025-09-19 11:09:38.574740 | orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2025-09-19 11:09:38.574759 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.35s 2025-09-19 11:09:38.574771 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-09-19 11:09:38.574782 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.10s 2025-09-19 11:09:38.574793 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s 2025-09-19 11:09:38.574803 | orchestrator | Do not require tty for all users ---------------------------------------- 0.71s 2025-09-19 11:09:38.574814 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s 2025-09-19 11:09:38.574825 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2025-09-19 11:09:38.574835 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-09-19 11:09:38.574845 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.56s 2025-09-19 11:09:38.574857 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.28s 2025-09-19 11:09:38.574868 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-09-19 11:09:38.574878 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.20s 2025-09-19 
11:09:38.574889 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-09-19 11:09:38.574900 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-09-19 11:09:38.574910 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-09-19 11:09:38.574921 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-09-19 11:09:38.574931 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-09-19 11:09:38.845610 | orchestrator | + osism apply --environment custom facts 2025-09-19 11:09:40.689672 | orchestrator | 2025-09-19 11:09:40 | INFO  | Trying to run play facts in environment custom 2025-09-19 11:09:50.826403 | orchestrator | 2025-09-19 11:09:50 | INFO  | Task 5a1000b6-2cec-40c4-b300-ec8531e3c234 (facts) was prepared for execution. 2025-09-19 11:09:50.826514 | orchestrator | 2025-09-19 11:09:50 | INFO  | It takes a moment until task 5a1000b6-2cec-40c4-b300-ec8531e3c234 (facts) has been started and output is visible here. 
2025-09-19 11:10:36.216251 | orchestrator |
2025-09-19 11:10:36.216365 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-19 11:10:36.216382 | orchestrator |
2025-09-19 11:10:36.216394 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 11:10:36.216405 | orchestrator | Friday 19 September 2025 11:09:54 +0000 (0:00:00.086) 0:00:00.086 ******
2025-09-19 11:10:36.216417 | orchestrator | ok: [testbed-manager]
2025-09-19 11:10:36.216429 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.216441 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.216452 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:10:36.216463 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.216474 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:10:36.216484 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:10:36.216495 | orchestrator |
2025-09-19 11:10:36.216505 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-19 11:10:36.216516 | orchestrator | Friday 19 September 2025 11:09:55 +0000 (0:00:01.397) 0:00:01.483 ******
2025-09-19 11:10:36.216527 | orchestrator | ok: [testbed-manager]
2025-09-19 11:10:36.216537 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.216548 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.216558 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:10:36.216569 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.216589 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:10:36.216682 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:10:36.216701 | orchestrator |
2025-09-19 11:10:36.216719 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-19 11:10:36.216739 | orchestrator |
2025-09-19 11:10:36.216758 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 11:10:36.216776 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:01.215) 0:00:02.699 ******
2025-09-19 11:10:36.216789 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.216801 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.216814 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.216826 | orchestrator |
2025-09-19 11:10:36.216838 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 11:10:36.216851 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:00.107) 0:00:02.807 ******
2025-09-19 11:10:36.216863 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.216874 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.216886 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.216897 | orchestrator |
2025-09-19 11:10:36.216910 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 11:10:36.216922 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:00.218) 0:00:03.025 ******
2025-09-19 11:10:36.216935 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.216946 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.216960 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.216972 | orchestrator |
2025-09-19 11:10:36.216984 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 11:10:36.216996 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:00.195) 0:00:03.221 ******
2025-09-19 11:10:36.217009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:10:36.217023 | orchestrator |
2025-09-19 11:10:36.217035 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 11:10:36.217048 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:00.139) 0:00:03.360 ******
2025-09-19 11:10:36.217085 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.217098 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.217110 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.217122 | orchestrator |
2025-09-19 11:10:36.217134 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 11:10:36.217146 | orchestrator | Friday 19 September 2025 11:09:58 +0000 (0:00:00.465) 0:00:03.826 ******
2025-09-19 11:10:36.217156 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:10:36.217167 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:10:36.217177 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:10:36.217188 | orchestrator |
2025-09-19 11:10:36.217198 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 11:10:36.217209 | orchestrator | Friday 19 September 2025 11:09:58 +0000 (0:00:00.101) 0:00:03.927 ******
2025-09-19 11:10:36.217219 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.217229 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.217240 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.217250 | orchestrator |
2025-09-19 11:10:36.217261 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 11:10:36.217271 | orchestrator | Friday 19 September 2025 11:09:59 +0000 (0:00:01.086) 0:00:05.013 ******
2025-09-19 11:10:36.217282 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.217292 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.217303 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.217313 | orchestrator |
2025-09-19 11:10:36.217324 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-19 11:10:36.217335 | orchestrator | Friday 19 September 2025 11:09:59 +0000 (0:00:00.466) 0:00:05.480 ******
2025-09-19 11:10:36.217345 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.217356 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.217366 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.217377 | orchestrator |
2025-09-19 11:10:36.217387 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-19 11:10:36.217398 | orchestrator | Friday 19 September 2025 11:10:01 +0000 (0:00:01.164) 0:00:06.644 ******
2025-09-19 11:10:36.217409 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.217419 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.217429 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.217440 | orchestrator |
2025-09-19 11:10:36.217450 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-19 11:10:36.217478 | orchestrator | Friday 19 September 2025 11:10:19 +0000 (0:00:18.215) 0:00:24.860 ******
2025-09-19 11:10:36.217490 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:10:36.217505 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:10:36.217516 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:10:36.217526 | orchestrator |
2025-09-19 11:10:36.217537 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-19 11:10:36.217566 | orchestrator | Friday 19 September 2025 11:10:19 +0000 (0:00:00.109) 0:00:24.969 ******
2025-09-19 11:10:36.217582 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:10:36.217630 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:10:36.217649 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:10:36.217666 | orchestrator |
2025-09-19 11:10:36.217683 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 11:10:36.217703 | orchestrator | Friday 19 September 2025 11:10:27 +0000 (0:00:07.543) 0:00:32.512 ******
2025-09-19 11:10:36.217721 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.217740 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.217754 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.217765 | orchestrator |
2025-09-19 11:10:36.217776 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 11:10:36.217786 | orchestrator | Friday 19 September 2025 11:10:27 +0000 (0:00:00.446) 0:00:32.959 ******
2025-09-19 11:10:36.217796 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-19 11:10:36.217821 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-19 11:10:36.217831 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-19 11:10:36.217842 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-19 11:10:36.217853 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-19 11:10:36.217863 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-19 11:10:36.217873 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-19 11:10:36.217884 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-19 11:10:36.217894 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-19 11:10:36.217905 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-19 11:10:36.217915 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-19 11:10:36.217925 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-19 11:10:36.217936 | orchestrator |
2025-09-19 11:10:36.217946 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 11:10:36.217957 | orchestrator | Friday 19 September 2025 11:10:31 +0000 (0:00:03.683) 0:00:36.643 ******
2025-09-19 11:10:36.217967 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.217978 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.217988 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.217999 | orchestrator |
2025-09-19 11:10:36.218009 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 11:10:36.218081 | orchestrator |
2025-09-19 11:10:36.218093 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 11:10:36.218104 | orchestrator | Friday 19 September 2025 11:10:32 +0000 (0:00:01.250) 0:00:37.893 ******
2025-09-19 11:10:36.218114 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:10:36.218125 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:10:36.218135 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:10:36.218146 | orchestrator | ok: [testbed-manager]
2025-09-19 11:10:36.218156 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:10:36.218167 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:10:36.218177 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:10:36.218187 | orchestrator |
2025-09-19 11:10:36.218198 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:10:36.218210 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:10:36.218221 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:10:36.218233 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:10:36.218244 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:10:36.218255 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:10:36.218266 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:10:36.218276 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:10:36.218287 | orchestrator |
2025-09-19 11:10:36.218297 | orchestrator |
2025-09-19 11:10:36.218308 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:10:36.218319 | orchestrator | Friday 19 September 2025 11:10:36 +0000 (0:00:03.786) 0:00:41.680 ******
2025-09-19 11:10:36.218329 | orchestrator | ===============================================================================
2025-09-19 11:10:36.218348 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.22s
2025-09-19 11:10:36.218359 | orchestrator | Install required packages (Debian) -------------------------------------- 7.54s
2025-09-19 11:10:36.218369 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.79s
2025-09-19 11:10:36.218380 | orchestrator | Copy fact files --------------------------------------------------------- 3.68s
2025-09-19 11:10:36.218396 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-09-19 11:10:36.218407 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.25s
2025-09-19 11:10:36.218428 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-09-19 11:10:36.478889 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.16s
2025-09-19 11:10:36.478984 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2025-09-19 11:10:36.478998 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-19 11:10:36.479010 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-09-19 11:10:36.479021 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-09-19 11:10:36.479031 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-09-19 11:10:36.479042 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-09-19 11:10:36.479052 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-09-19 11:10:36.479064 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-09-19 11:10:36.479074 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-09-19 11:10:36.479085 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-09-19 11:10:36.749574 | orchestrator | + osism apply bootstrap
2025-09-19 11:10:48.904383 | orchestrator | 2025-09-19 11:10:48 | INFO  | Task 641f54ab-b866-4a46-ac0e-aa221cc02a90 (bootstrap) was prepared for execution.
2025-09-19 11:10:48.904483 | orchestrator | 2025-09-19 11:10:48 | INFO  | It takes a moment until task 641f54ab-b866-4a46-ac0e-aa221cc02a90 (bootstrap) has been started and output is visible here.
2025-09-19 11:11:04.821744 | orchestrator |
2025-09-19 11:11:04.821856 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-19 11:11:04.821872 | orchestrator |
2025-09-19 11:11:04.821884 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-19 11:11:04.821895 | orchestrator | Friday 19 September 2025 11:10:53 +0000 (0:00:00.171) 0:00:00.171 ******
2025-09-19 11:11:04.821906 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:04.821918 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:04.821929 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:04.821940 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:04.821950 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:04.821961 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:04.821972 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:04.821982 | orchestrator |
2025-09-19 11:11:04.821993 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 11:11:04.822004 | orchestrator |
2025-09-19 11:11:04.822072 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 11:11:04.822085 | orchestrator | Friday 19 September 2025 11:10:53 +0000 (0:00:00.224) 0:00:00.396 ******
2025-09-19 11:11:04.822096 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:04.822107 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:04.822117 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:04.822137 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:04.822147 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:04.822158 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:04.822168 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:04.822204 | orchestrator |
2025-09-19 11:11:04.822216 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-19 11:11:04.822226 | orchestrator |
2025-09-19 11:11:04.822237 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 11:11:04.822247 | orchestrator | Friday 19 September 2025 11:10:56 +0000 (0:00:03.656) 0:00:04.052 ******
2025-09-19 11:11:04.822259 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-19 11:11:04.822270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-19 11:11:04.822280 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:11:04.822291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:11:04.822301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-19 11:11:04.822312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:11:04.822322 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:11:04.822333 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 11:11:04.822343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-19 11:11:04.822354 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-19 11:11:04.822364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-19 11:11:04.822375 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 11:11:04.822386 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-19 11:11:04.822397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-19 11:11:04.822408 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 11:11:04.822418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-19 11:11:04.822428 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 11:11:04.822439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-19 11:11:04.822449 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 11:11:04.822460 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-19 11:11:04.822470 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:11:04.822481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 11:11:04.822491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-19 11:11:04.822502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 11:11:04.822512 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-19 11:11:04.822526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 11:11:04.822546 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-19 11:11:04.822590 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 11:11:04.822608 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-19 11:11:04.822626 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-19 11:11:04.822643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 11:11:04.822661 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-19 11:11:04.822679 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-19 11:11:04.822699 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:11:04.822718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 11:11:04.822736 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-19 11:11:04.822751 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:11:04.822762 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 11:11:04.822772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 11:11:04.822783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:11:04.822811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-19 11:11:04.822833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:11:04.822843 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 11:11:04.822855 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-19 11:11:04.822865 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-19 11:11:04.822876 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:11:04.822906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:11:04.822917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 11:11:04.822927 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:11:04.822938 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-19 11:11:04.822949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-19 11:11:04.822959 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-19 11:11:04.822970 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:11:04.822980 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-19 11:11:04.822990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-19 11:11:04.823001 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:11:04.823011 | orchestrator |
2025-09-19 11:11:04.823022 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-19 11:11:04.823033 | orchestrator |
2025-09-19 11:11:04.823043 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-19 11:11:04.823054 | orchestrator | Friday 19 September 2025 11:10:57 +0000 (0:00:00.468) 0:00:04.520 ******
2025-09-19 11:11:04.823064 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:04.823075 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:04.823085 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:04.823096 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:04.823106 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:04.823117 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:04.823127 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:04.823137 | orchestrator |
2025-09-19 11:11:04.823154 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-19 11:11:04.823179 | orchestrator | Friday 19 September 2025 11:10:58 +0000 (0:00:01.480) 0:00:06.001 ******
2025-09-19 11:11:04.823201 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:04.823220 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:04.823237 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:04.823255 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:04.823270 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:04.823287 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:04.823304 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:04.823320 | orchestrator |
2025-09-19 11:11:04.823337 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-19 11:11:04.823355 | orchestrator | Friday 19 September 2025 11:11:00 +0000 (0:00:01.281) 0:00:07.282 ******
2025-09-19 11:11:04.823374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:11:04.823396 | orchestrator |
2025-09-19 11:11:04.823414 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-19 11:11:04.823432 | orchestrator
| Friday 19 September 2025 11:11:00 +0000 (0:00:00.240) 0:00:07.523 ****** 2025-09-19 11:11:04.823449 | orchestrator | changed: [testbed-manager] 2025-09-19 11:11:04.823468 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:04.823487 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:04.823505 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:04.823521 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:04.823531 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:04.823542 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:04.823553 | orchestrator | 2025-09-19 11:11:04.823616 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-19 11:11:04.823635 | orchestrator | Friday 19 September 2025 11:11:02 +0000 (0:00:01.995) 0:00:09.518 ****** 2025-09-19 11:11:04.823653 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:04.823672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:04.823693 | orchestrator | 2025-09-19 11:11:04.823722 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-19 11:11:04.823740 | orchestrator | Friday 19 September 2025 11:11:02 +0000 (0:00:00.237) 0:00:09.756 ****** 2025-09-19 11:11:04.823758 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:04.823777 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:04.823797 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:04.823816 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:04.823834 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:04.823854 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:04.823872 | orchestrator | 2025-09-19 11:11:04.823892 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-19 11:11:04.823909 | orchestrator | Friday 19 September 2025 11:11:03 +0000 (0:00:01.060) 0:00:10.817 ****** 2025-09-19 11:11:04.823927 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:04.823946 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:04.823964 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:04.823982 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:04.824000 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:04.824017 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:04.824036 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:04.824054 | orchestrator | 2025-09-19 11:11:04.824073 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-19 11:11:04.824093 | orchestrator | Friday 19 September 2025 11:11:04 +0000 (0:00:00.560) 0:00:11.377 ****** 2025-09-19 11:11:04.824111 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:04.824124 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:04.824135 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:04.824146 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:04.824156 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:04.824166 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:04.824177 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:04.824187 | orchestrator | 2025-09-19 11:11:04.824199 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 11:11:04.824211 | orchestrator | Friday 19 September 2025 11:11:04 +0000 (0:00:00.424) 0:00:11.802 ****** 2025-09-19 11:11:04.824221 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:04.824232 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:04.824258 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:18.035436 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:11:18.035608 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:18.035635 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:18.035703 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:18.035726 | orchestrator | 2025-09-19 11:11:18.035740 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 11:11:18.035752 | orchestrator | Friday 19 September 2025 11:11:04 +0000 (0:00:00.213) 0:00:12.015 ****** 2025-09-19 11:11:18.035765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:18.035795 | orchestrator | 2025-09-19 11:11:18.035807 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 11:11:18.035818 | orchestrator | Friday 19 September 2025 11:11:05 +0000 (0:00:00.280) 0:00:12.296 ****** 2025-09-19 11:11:18.035855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:18.035866 | orchestrator | 2025-09-19 11:11:18.035877 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 11:11:18.035888 | orchestrator | Friday 19 September 2025 11:11:05 +0000 (0:00:00.310) 0:00:12.606 ****** 2025-09-19 11:11:18.035899 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.035911 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.035921 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.035932 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.035942 | orchestrator | ok: [testbed-node-3] 2025-09-19 
11:11:18.035953 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.035963 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.035975 | orchestrator | 2025-09-19 11:11:18.035987 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 11:11:18.036000 | orchestrator | Friday 19 September 2025 11:11:07 +0000 (0:00:01.599) 0:00:14.205 ****** 2025-09-19 11:11:18.036013 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:18.036025 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:18.036037 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:18.036049 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:18.036061 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:18.036072 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:18.036084 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:18.036097 | orchestrator | 2025-09-19 11:11:18.036110 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 11:11:18.036122 | orchestrator | Friday 19 September 2025 11:11:07 +0000 (0:00:00.241) 0:00:14.446 ****** 2025-09-19 11:11:18.036136 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.036155 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.036172 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.036188 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.036205 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.036222 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.036239 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.036256 | orchestrator | 2025-09-19 11:11:18.036274 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 11:11:18.036294 | orchestrator | Friday 19 September 2025 11:11:08 +0000 (0:00:01.433) 0:00:15.880 ****** 2025-09-19 11:11:18.036311 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 11:11:18.036329 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:18.036348 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:18.036365 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:18.036385 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:18.036402 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:18.036420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:18.036436 | orchestrator | 2025-09-19 11:11:18.036448 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-19 11:11:18.036459 | orchestrator | Friday 19 September 2025 11:11:09 +0000 (0:00:00.279) 0:00:16.159 ****** 2025-09-19 11:11:18.036470 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.036481 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:18.036491 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:18.036502 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:18.036512 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:18.036523 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:18.036533 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:18.036567 | orchestrator | 2025-09-19 11:11:18.036578 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 11:11:18.036589 | orchestrator | Friday 19 September 2025 11:11:09 +0000 (0:00:00.648) 0:00:16.808 ****** 2025-09-19 11:11:18.036613 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.036623 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:18.036634 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:18.036644 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:18.036655 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:18.036665 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:18.036676 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 11:11:18.036686 | orchestrator | 2025-09-19 11:11:18.036697 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 11:11:18.036707 | orchestrator | Friday 19 September 2025 11:11:10 +0000 (0:00:01.061) 0:00:17.869 ****** 2025-09-19 11:11:18.036718 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.036728 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.036739 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.036749 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.036760 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.036771 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.036782 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.036792 | orchestrator | 2025-09-19 11:11:18.036803 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 11:11:18.036814 | orchestrator | Friday 19 September 2025 11:11:11 +0000 (0:00:01.147) 0:00:19.017 ****** 2025-09-19 11:11:18.036843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:18.036855 | orchestrator | 2025-09-19 11:11:18.036866 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 11:11:18.036877 | orchestrator | Friday 19 September 2025 11:11:12 +0000 (0:00:00.399) 0:00:19.416 ****** 2025-09-19 11:11:18.036887 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:18.036898 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:18.036908 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:18.036919 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:18.036929 | orchestrator | changed: [testbed-node-5] 2025-09-19 
11:11:18.036940 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:18.036950 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:18.036961 | orchestrator | 2025-09-19 11:11:18.036971 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 11:11:18.036982 | orchestrator | Friday 19 September 2025 11:11:13 +0000 (0:00:01.253) 0:00:20.669 ****** 2025-09-19 11:11:18.036993 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037003 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.037014 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.037024 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.037035 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037045 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037056 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037066 | orchestrator | 2025-09-19 11:11:18.037077 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 11:11:18.037088 | orchestrator | Friday 19 September 2025 11:11:13 +0000 (0:00:00.232) 0:00:20.902 ****** 2025-09-19 11:11:18.037098 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037109 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.037119 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.037130 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.037140 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037151 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037161 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037172 | orchestrator | 2025-09-19 11:11:18.037183 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 11:11:18.037193 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.258) 0:00:21.161 ****** 2025-09-19 11:11:18.037204 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037214 | 
orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.037233 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.037243 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.037254 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037264 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037274 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037285 | orchestrator | 2025-09-19 11:11:18.037296 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 11:11:18.037357 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.226) 0:00:21.387 ****** 2025-09-19 11:11:18.037380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:18.037398 | orchestrator | 2025-09-19 11:11:18.037415 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 11:11:18.037433 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.276) 0:00:21.664 ****** 2025-09-19 11:11:18.037452 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037472 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.037491 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.037502 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.037512 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037523 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037533 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037564 | orchestrator | 2025-09-19 11:11:18.037581 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 11:11:18.037592 | orchestrator | Friday 19 September 2025 11:11:15 +0000 (0:00:00.530) 0:00:22.195 ****** 2025-09-19 11:11:18.037603 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 11:11:18.037613 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:18.037624 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:18.037635 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:18.037645 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:18.037656 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:18.037666 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:18.037677 | orchestrator | 2025-09-19 11:11:18.037688 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 11:11:18.037698 | orchestrator | Friday 19 September 2025 11:11:15 +0000 (0:00:00.222) 0:00:22.417 ****** 2025-09-19 11:11:18.037709 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037724 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:18.037744 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037763 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:18.037782 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:18.037802 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037822 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037844 | orchestrator | 2025-09-19 11:11:18.037865 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 11:11:18.037886 | orchestrator | Friday 19 September 2025 11:11:16 +0000 (0:00:01.091) 0:00:23.508 ****** 2025-09-19 11:11:18.037907 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.037927 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:18.037938 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:18.037949 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.037959 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.037969 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:18.037980 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:18.037990 | orchestrator | 
2025-09-19 11:11:18.038001 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 11:11:18.038012 | orchestrator | Friday 19 September 2025 11:11:16 +0000 (0:00:00.574) 0:00:24.083 ****** 2025-09-19 11:11:18.038086 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:18.038097 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:18.038108 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:18.038118 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:18.038157 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:06.985117 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985242 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:06.985260 | orchestrator | 2025-09-19 11:12:06.985272 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 11:12:06.985285 | orchestrator | Friday 19 September 2025 11:11:18 +0000 (0:00:01.059) 0:00:25.142 ****** 2025-09-19 11:12:06.985296 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985307 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.985318 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.985329 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:06.985340 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:06.985350 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:06.985361 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:06.985372 | orchestrator | 2025-09-19 11:12:06.985382 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-19 11:12:06.985393 | orchestrator | Friday 19 September 2025 11:11:36 +0000 (0:00:18.750) 0:00:43.893 ****** 2025-09-19 11:12:06.985404 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:06.985415 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:12:06.985425 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.985436 | orchestrator 
| ok: [testbed-node-2] 2025-09-19 11:12:06.985447 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.985457 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985468 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.985505 | orchestrator | 2025-09-19 11:12:06.985517 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-19 11:12:06.985528 | orchestrator | Friday 19 September 2025 11:11:36 +0000 (0:00:00.226) 0:00:44.119 ****** 2025-09-19 11:12:06.985539 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:06.985549 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:12:06.985560 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.985571 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:12:06.985581 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.985592 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985602 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.985613 | orchestrator | 2025-09-19 11:12:06.985626 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-19 11:12:06.985639 | orchestrator | Friday 19 September 2025 11:11:37 +0000 (0:00:00.266) 0:00:44.386 ****** 2025-09-19 11:12:06.985652 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:06.985664 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:12:06.985677 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.985689 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:12:06.985701 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.985713 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985725 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.985737 | orchestrator | 2025-09-19 11:12:06.985750 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-19 11:12:06.985762 | orchestrator | Friday 19 September 2025 11:11:37 +0000 (0:00:00.221) 0:00:44.607 ****** 2025-09-19 
11:12:06.985776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:12:06.985792 | orchestrator | 2025-09-19 11:12:06.985806 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-19 11:12:06.985818 | orchestrator | Friday 19 September 2025 11:11:37 +0000 (0:00:00.260) 0:00:44.868 ****** 2025-09-19 11:12:06.985830 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:06.985841 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:12:06.985852 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.985862 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.985873 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:12:06.985884 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.985922 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.985934 | orchestrator | 2025-09-19 11:12:06.985944 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-19 11:12:06.985971 | orchestrator | Friday 19 September 2025 11:11:39 +0000 (0:00:01.680) 0:00:46.548 ****** 2025-09-19 11:12:06.985991 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:06.986003 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:06.986072 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:06.986088 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:06.986098 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:06.986109 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:06.986128 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:06.986139 | orchestrator | 2025-09-19 11:12:06.986150 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-19 11:12:06.986161 | 
orchestrator | Friday 19 September 2025 11:11:40 +0000 (0:00:01.129) 0:00:47.677 ****** 2025-09-19 11:12:06.986172 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:06.986183 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:12:06.986194 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:12:06.986204 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.986215 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:12:06.986225 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.986236 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:12:06.986247 | orchestrator | 2025-09-19 11:12:06.986257 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-19 11:12:06.986268 | orchestrator | Friday 19 September 2025 11:11:41 +0000 (0:00:00.861) 0:00:48.539 ****** 2025-09-19 11:12:06.986280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:12:06.986293 | orchestrator | 2025-09-19 11:12:06.986304 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-19 11:12:06.986315 | orchestrator | Friday 19 September 2025 11:11:41 +0000 (0:00:00.299) 0:00:48.838 ****** 2025-09-19 11:12:06.986326 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:06.986336 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:06.986347 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:06.986357 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:06.986368 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:06.986379 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:06.986389 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:06.986400 | orchestrator | 2025-09-19 11:12:06.986432 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-19 11:12:06.986444 | orchestrator | Friday 19 September 2025 11:11:42 +0000 (0:00:01.065) 0:00:49.904 ****** 2025-09-19 11:12:06.986454 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:12:06.986465 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:12:06.986512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:12:06.986524 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:12:06.986535 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:12:06.986545 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:12:06.986556 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:12:06.986566 | orchestrator | 2025-09-19 11:12:06.986577 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-19 11:12:06.986588 | orchestrator | Friday 19 September 2025 11:11:43 +0000 (0:00:00.323) 0:00:50.227 ****** 2025-09-19 11:12:06.986598 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:06.986609 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:06.986619 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:06.986629 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:06.986640 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:06.986650 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:06.986660 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:06.986682 | orchestrator | 2025-09-19 11:12:06.986693 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-19 11:12:06.986703 | orchestrator | Friday 19 September 2025 11:12:01 +0000 (0:00:18.088) 0:01:08.316 ****** 2025-09-19 11:12:06.986714 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:12:06.986724 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:12:06.986735 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:12:06.986745 | orchestrator | ok: [testbed-manager] 2025-09-19 
11:12:06.986756 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.986766 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.986777 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.986787 | orchestrator |
2025-09-19 11:12:06.986798 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-19 11:12:06.986809 | orchestrator | Friday 19 September 2025 11:12:02 +0000 (0:00:01.458) 0:01:09.775 ******
2025-09-19 11:12:06.986820 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:06.986830 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.986841 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:06.986851 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:06.986861 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:06.986872 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.986882 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.986892 | orchestrator |
2025-09-19 11:12:06.986903 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-19 11:12:06.986921 | orchestrator | Friday 19 September 2025 11:12:03 +0000 (0:00:00.210) 0:01:10.650 ******
2025-09-19 11:12:06.986936 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:06.986947 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.986957 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:06.986967 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:06.986978 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:06.986988 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.986998 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.987009 | orchestrator |
2025-09-19 11:12:06.987020 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-19 11:12:06.987030 | orchestrator | Friday 19 September 2025 11:12:03 +0000 (0:00:00.210) 0:01:10.860 ******
2025-09-19 11:12:06.987041 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:06.987051 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.987061 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:06.987072 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:06.987082 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:06.987092 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.987103 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.987113 | orchestrator |
2025-09-19 11:12:06.987124 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-19 11:12:06.987134 | orchestrator | Friday 19 September 2025 11:12:03 +0000 (0:00:00.222) 0:01:11.083 ******
2025-09-19 11:12:06.987146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:12:06.987157 | orchestrator |
2025-09-19 11:12:06.987168 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-19 11:12:06.987178 | orchestrator | Friday 19 September 2025 11:12:04 +0000 (0:00:00.287) 0:01:11.370 ******
2025-09-19 11:12:06.987189 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:06.987199 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.987210 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:06.987220 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.987231 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:06.987241 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.987252 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:06.987262 | orchestrator |
2025-09-19 11:12:06.987273 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-19 11:12:06.987290 | orchestrator | Friday 19 September 2025 11:12:06 +0000 (0:00:01.808) 0:01:13.179 ******
2025-09-19 11:12:06.987301 | orchestrator | changed: [testbed-manager]
2025-09-19 11:12:06.987312 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:06.987323 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:06.987333 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:06.987343 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:06.987354 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:06.987364 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:06.987375 | orchestrator |
2025-09-19 11:12:06.987385 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-19 11:12:06.987396 | orchestrator | Friday 19 September 2025 11:12:06 +0000 (0:00:00.644) 0:01:13.824 ******
2025-09-19 11:12:06.987406 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:06.987417 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:06.987427 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:06.987438 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:06.987448 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:06.987458 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:06.987469 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:06.987526 | orchestrator |
2025-09-19 11:12:06.987546 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-19 11:14:31.956578 | orchestrator | Friday 19 September 2025 11:12:06 +0000 (0:00:00.267) 0:01:14.091 ******
2025-09-19 11:14:31.956675 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:31.956687 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:31.956695 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:31.956702 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:31.956710 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:31.956717 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:31.956724 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:31.956731 | orchestrator |
2025-09-19 11:14:31.956740 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-19 11:14:31.956747 | orchestrator | Friday 19 September 2025 11:12:08 +0000 (0:00:01.414) 0:01:15.505 ******
2025-09-19 11:14:31.956755 | orchestrator | changed: [testbed-manager]
2025-09-19 11:14:31.956763 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:14:31.956770 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:14:31.956777 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:14:31.956784 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:14:31.956791 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:14:31.956798 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:14:31.956805 | orchestrator |
2025-09-19 11:14:31.956813 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-19 11:14:31.956820 | orchestrator | Friday 19 September 2025 11:12:10 +0000 (0:00:02.028) 0:01:17.534 ******
2025-09-19 11:14:31.956827 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:31.956835 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:31.956842 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:31.956849 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:31.956856 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:31.956863 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:31.956870 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:31.956877 | orchestrator |
2025-09-19 11:14:31.956884 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-19 11:14:31.956891 | orchestrator | Friday 19 September 2025 11:12:13 +0000 (0:00:02.766) 0:01:20.300 ******
2025-09-19 11:14:31.956898 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:31.956905 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:31.956912 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:31.956919 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:31.956940 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:31.956947 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:31.956954 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:31.956962 | orchestrator |
2025-09-19 11:14:31.956969 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-19 11:14:31.956991 | orchestrator | Friday 19 September 2025 11:12:53 +0000 (0:00:40.520) 0:02:00.821 ******
2025-09-19 11:14:31.956999 | orchestrator | changed: [testbed-manager]
2025-09-19 11:14:31.957006 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:14:31.957013 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:14:31.957020 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:14:31.957026 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:14:31.957033 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:14:31.957041 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:14:31.957047 | orchestrator |
2025-09-19 11:14:31.957054 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-19 11:14:31.957062 | orchestrator | Friday 19 September 2025 11:14:11 +0000 (0:01:17.395) 0:03:18.216 ******
2025-09-19 11:14:31.957069 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:31.957076 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:31.957083 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:31.957090 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:31.957097 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:31.957104 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:31.957111 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:31.957118 | orchestrator |
2025-09-19 11:14:31.957125 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-19 11:14:31.957133 | orchestrator | Friday 19 September 2025 11:14:12 +0000 (0:00:01.808) 0:03:20.024 ******
2025-09-19 11:14:31.957141 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:31.957149 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:31.957157 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:31.957168 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:31.957176 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:31.957184 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:31.957192 | orchestrator | changed: [testbed-manager]
2025-09-19 11:14:31.957200 | orchestrator |
2025-09-19 11:14:31.957207 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-19 11:14:31.957216 | orchestrator | Friday 19 September 2025 11:14:24 +0000 (0:00:11.366) 0:03:31.391 ******
2025-09-19 11:14:31.957230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-19 11:14:31.957242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-19 11:14:31.957268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-19 11:14:31.957282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-19 11:14:31.957297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-19 11:14:31.957306 | orchestrator |
2025-09-19 11:14:31.957334 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-19 11:14:31.957343 | orchestrator | Friday 19 September 2025 11:14:24 +0000 (0:00:00.438) 0:03:31.829 ******
2025-09-19 11:14:31.957351 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957359 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:31.957367 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957375 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957383 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:31.957391 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:31.957399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957407 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:31.957414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957430 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:14:31.957438 | orchestrator |
2025-09-19 11:14:31.957446 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-19 11:14:31.957454 | orchestrator | Friday 19 September 2025 11:14:26 +0000 (0:00:01.603) 0:03:33.433 ******
2025-09-19 11:14:31.957462 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:31.957472 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:31.957480 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:31.957488 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:31.957496 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:31.957507 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:31.957514 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:31.957521 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:31.957528 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:31.957535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:31.957542 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:31.957549 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:31.957556 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:31.957563 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:31.957570 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:31.957577 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:31.957584 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:31.957597 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:31.957604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:31.957611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:31.957618 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:31.957630 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217012 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:34.217109 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:34.217124 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:34.217144 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:34.217163 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217184 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:34.217204 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:34.217228 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:34.217247 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:34.217288 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217299 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:34.217371 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:34.217384 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:34.217395 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:34.217406 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:34.217416 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:34.217427 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:34.217439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:34.217450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217461 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217472 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:34.217483 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:34.217493 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:34.217504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:14:34.217515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:34.217526 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:34.217536 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:14:34.217550 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:34.217587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:34.217600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:34.217612 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:34.217624 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:34.217637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:14:34.217649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:34.217662 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:34.217674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:14:34.217685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:34.217698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:34.217709 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:14:34.217722 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217734 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:34.217746 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:34.217788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:34.217800 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:14:34.217823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:14:34.217848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:14:34.217860 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:14:34.217872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:14:34.217884 | orchestrator |
2025-09-19 11:14:34.217897 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-19 11:14:34.217910 | orchestrator | Friday 19 September 2025 11:14:31 +0000 (0:00:05.629) 0:03:39.063 ******
2025-09-19 11:14:34.217921 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217931 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217953 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217963 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217974 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217984 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:14:34.217995 | orchestrator |
2025-09-19 11:14:34.218005 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-19 11:14:34.218079 | orchestrator | Friday 19 September 2025 11:14:32 +0000 (0:00:00.636) 0:03:39.699 ******
2025-09-19 11:14:34.218093 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218103 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218125 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:34.218135 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218146 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:34.218157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:34.218168 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:34.218199 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218228 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218251 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:14:34.218272 | orchestrator |
2025-09-19 11:14:34.218293 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-19 11:14:34.218333 | orchestrator | Friday 19 September 2025 11:14:33 +0000 (0:00:00.617) 0:03:40.317 ******
2025-09-19 11:14:34.218345 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218356 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218367 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:34.218377 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218388 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:34.218399 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218410 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:34.218420 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:34.218431 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218442 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218452 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:14:34.218463 | orchestrator |
2025-09-19 11:14:34.218474 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-19 11:14:34.218484 | orchestrator | Friday 19 September 2025 11:14:33 +0000 (0:00:00.683) 0:03:41.000 ******
2025-09-19 11:14:34.218495 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:34.218505 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:34.218516 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:34.218527 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:34.218538 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:34.218557 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:46.408178 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:46.408280 | orchestrator |
2025-09-19 11:14:46.408351 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-19 11:14:46.408366 | orchestrator | Friday 19 September 2025 11:14:34 +0000 (0:00:00.324) 0:03:41.325 ******
2025-09-19 11:14:46.408377 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:46.408390 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:46.408402 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:46.408413 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:46.408447 | orchestrator | ok: [testbed-node-1]
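The sysctl profiles applied in the tasks above (elasticsearch, rabbitmq, generic, compute, k3s_node) are plain key/value kernel parameters. As a minimal illustration of what the role ends up writing, the profile data below is copied from the task output, while the function name and the sysctl.d-style rendering are assumptions for the sketch, not the role's actual implementation:

```python
# Sketch only: render the sysctl profiles seen in the log above into
# /etc/sysctl.d-style drop-in content. Profile values are taken from the
# task output; render_dropin() and the file layout are illustrative.
SYSCTL_PROFILES = {
    "rabbitmq": {
        "net.ipv4.tcp_keepalive_time": 6,
        "net.ipv4.tcp_keepalive_intvl": 3,
        "net.ipv4.tcp_keepalive_probes": 3,
        "net.core.wmem_max": 16777216,
        "net.core.rmem_max": 16777216,
        "net.ipv4.tcp_fin_timeout": 20,
        "net.ipv4.tcp_tw_reuse": 1,
        "net.core.somaxconn": 4096,
        "net.ipv4.tcp_syncookies": 0,
        "net.ipv4.tcp_max_syn_backlog": 8192,
    },
    "generic": {"vm.swappiness": 1},
    "compute": {"net.netfilter.nf_conntrack_max": 1048576},
    "k3s_node": {"fs.inotify.max_user_instances": 1024},
    "elasticsearch": {"vm.max_map_count": 262144},
}

def render_dropin(profile: str) -> str:
    """Return the text of a sysctl.d drop-in for the given profile."""
    params = SYSCTL_PROFILES[profile]
    return "".join(f"{name}={value}\n" for name, value in params.items())

if __name__ == "__main__":
    # e.g. what /etc/sysctl.d/rabbitmq.conf could look like
    print(render_dropin("rabbitmq"))
```

Note how the `skipping`/`changed` pattern in the log matches this data: only hosts in the matching group (e.g. the compute nodes testbed-node-3/4/5 for `nf_conntrack_max`) actually apply a profile; the rest skip it.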
2025-09-19 11:14:46.408458 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:46.408469 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:46.408480 | orchestrator |
2025-09-19 11:14:46.408490 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-19 11:14:46.408501 | orchestrator | Friday 19 September 2025 11:14:40 +0000 (0:00:06.162) 0:03:47.488 ******
2025-09-19 11:14:46.408511 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-19 11:14:46.408522 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-19 11:14:46.408532 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:46.408543 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-19 11:14:46.408553 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:46.408564 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-19 11:14:46.408574 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:46.408585 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-19 11:14:46.408595 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:46.408606 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-19 11:14:46.408616 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:46.408631 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:46.408641 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-19 11:14:46.408652 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:46.408662 | orchestrator |
2025-09-19 11:14:46.408673 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-19 11:14:46.408683 | orchestrator | Friday 19 September 2025 11:14:40 +0000 (0:00:00.266) 0:03:47.754 ******
2025-09-19 11:14:46.408694 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-19 11:14:46.408704 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-19 11:14:46.408715 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-19 11:14:46.408727 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-19 11:14:46.408738 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-19 11:14:46.408750 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-19 11:14:46.408761 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-19 11:14:46.408773 | orchestrator |
2025-09-19 11:14:46.408785 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-19 11:14:46.408797 | orchestrator | Friday 19 September 2025 11:14:41 +0000 (0:00:01.139) 0:03:48.894 ******
2025-09-19 11:14:46.408811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:14:46.408827 | orchestrator |
2025-09-19 11:14:46.408839 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-19 11:14:46.408852 | orchestrator | Friday 19 September 2025 11:14:42 +0000 (0:00:00.463) 0:03:49.357 ******
2025-09-19 11:14:46.408863 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:46.408876 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:46.408887 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:46.408899 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:46.408911 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:46.408923 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:46.408935 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:46.408947 | orchestrator |
2025-09-19 11:14:46.408973 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-19 11:14:46.408985 | orchestrator | Friday 19 September 2025 11:14:43 +0000 (0:00:01.216) 0:03:50.574 ******
2025-09-19 11:14:46.408997 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:46.409009 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:46.409021 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:46.409032 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:46.409044 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:46.409055 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:46.409076 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:46.409088 | orchestrator |
2025-09-19 11:14:46.409098 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-19 11:14:46.409109 | orchestrator | Friday 19 September 2025 11:14:44 +0000 (0:00:00.636) 0:03:51.211 ******
2025-09-19 11:14:46.409120 | orchestrator | changed: [testbed-manager]
2025-09-19 11:14:46.409131 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:14:46.409141 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:14:46.409151 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:14:46.409162 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:14:46.409172 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:14:46.409182 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:14:46.409193 | orchestrator |
2025-09-19 11:14:46.409203 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-19 11:14:46.409214 | orchestrator | Friday 19 September 2025 11:14:44 +0000 (0:00:00.612) 0:03:51.823 ******
2025-09-19 11:14:46.409224 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:46.409234 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:46.409245 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:46.409255 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:46.409266 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:46.409276 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:46.409286 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:46.409320 | orchestrator |
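The "Disable the dynamic motd-news service" task above reports `changed` on every host. On Debian/Ubuntu-family systems the dynamic MOTD news feature is controlled by the `ENABLED=` setting in `/etc/default/motd-news`; the log does not show the exact edit the role performs, so the following is only an illustrative equivalent of that effect:

```python
# Sketch only: illustrate disabling motd-news by flipping ENABLED= to 0
# in /etc/default/motd-news text. The actual mechanism used by the
# osism.commons.motd role is not visible in the log above.
import re

def disable_motd_news(contents: str) -> str:
    """Return the file text with any ENABLED= line forced to ENABLED=0."""
    return re.sub(r"^ENABLED=.*$", "ENABLED=0", contents, flags=re.M)

example = "# Enable the dynamic MOTD news service\nENABLED=1\n"
print(disable_motd_news(example))
```

A task like this is idempotent in the Ansible sense: the first run flips the value and reports `changed` (as seen on all seven hosts above), while a rerun would leave the file untouched.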
2025-09-19 11:14:46.409331 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-19 11:14:46.409342 | orchestrator | Friday 19 September 2025 11:14:45 +0000 (0:00:00.711) 0:03:52.535 ******
2025-09-19 11:14:46.409373 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278821.763182, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409388 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278843.7307918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409400 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278851.3976738, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409411 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278858.8587956, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409428 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278846.5757704, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409448 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278857.8226118, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409459 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278845.924437, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:14:46.409487 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611602 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611703 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611719 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611751 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611764 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611775 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:15:01.611787 | orchestrator |
2025-09-19 11:15:01.611800 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-19 11:15:01.611812 | orchestrator | Friday 19 September 2025 11:14:46 +0000 (0:00:00.978) 0:03:53.514 ******
2025-09-19 11:15:01.611824 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:01.611835 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:15:01.611846 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:15:01.611856 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:15:01.611867 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:15:01.611877 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:15:01.611888 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:15:01.611899 | orchestrator |
2025-09-19 11:15:01.611910 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-19 11:15:01.611920 | orchestrator | Friday 19 September 2025 11:14:47 +0000 (0:00:01.034) 0:03:54.549 ******
2025-09-19 11:15:01.611931 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:01.611942 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:15:01.611952 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:15:01.611963 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:15:01.611986 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:15:01.612006 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:15:01.612019 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:15:01.612030 | orchestrator |
2025-09-19 11:15:01.612041 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-19 11:15:01.612051 | orchestrator | Friday 19 September 2025 11:14:48 +0000 (0:00:01.165) 0:03:55.714 ******
2025-09-19 11:15:01.612062 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:01.612072 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:15:01.612083 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:15:01.612093 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:15:01.612104 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:15:01.612114 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:15:01.612125 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:15:01.612135 | orchestrator |
2025-09-19 11:15:01.612146 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-19 11:15:01.612159 | orchestrator | Friday 19 September 2025 11:14:49 +0000 (0:00:01.168) 0:03:56.883 ******
2025-09-19 11:15:01.612180 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:15:01.612193 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:15:01.612205 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:15:01.612231 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:15:01.612243 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:15:01.612255 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:15:01.612267 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:15:01.612350 | orchestrator |
2025-09-19 11:15:01.612370 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-19 11:15:01.612384 | orchestrator | Friday 19 September 2025 11:14:50 +0000 (0:00:00.283) 0:03:57.166 ******
2025-09-19 11:15:01.612396 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.612409 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.612425 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.612445 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:01.612462 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.612480 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.612499 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:01.612516 | orchestrator |
2025-09-19 11:15:01.612535 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-19 11:15:01.612555 | orchestrator | Friday 19 September 2025 11:14:50 +0000 (0:00:00.624) 0:03:57.790 ******
2025-09-19 11:15:01.612576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:15:01.612594 | orchestrator |
2025-09-19 11:15:01.612605 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-19 11:15:01.612616 | orchestrator | Friday 19 September 2025 11:14:51 +0000 (0:00:00.363) 0:03:58.154 ******
2025-09-19 11:15:01.612626 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.612637 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:15:01.612647 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:15:01.612658 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:15:01.612674 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:15:01.612692 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:15:01.612711 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:15:01.612730 | orchestrator |
2025-09-19 11:15:01.612749 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-19 11:15:01.612768 | orchestrator | Friday 19 September 2025 11:14:58 +0000 (0:00:07.428) 0:04:05.582 ******
2025-09-19 11:15:01.612795 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.612816 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.612835 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.612851 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.612862 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.612872 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:01.612883 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:01.612894 | orchestrator |
2025-09-19 11:15:01.612905 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-19 11:15:01.612916 | orchestrator | Friday 19 September 2025 11:14:59 +0000 (0:00:01.185) 0:04:06.768 ******
2025-09-19 11:15:01.612926 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.612937 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.612947 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.612958 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.612968 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.612978 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:01.612988 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:01.612999 | orchestrator |
2025-09-19 11:15:01.613010 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-19 11:15:01.613020 | orchestrator | Friday 19 September 2025 11:15:00 +0000 (0:00:00.997) 0:04:07.766 ******
2025-09-19 11:15:01.613031 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.613052 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.613063 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.613073 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.613083 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.613094 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:01.613104 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:01.613115 | orchestrator |
2025-09-19 11:15:01.613125 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-19 11:15:01.613137 | orchestrator | Friday 19 September 2025 11:15:00 +0000 (0:00:00.318) 0:04:08.084 ******
2025-09-19 11:15:01.613148 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.613158 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.613168 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.613179 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.613189 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.613200 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:01.613210 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:01.613220 | orchestrator |
2025-09-19 11:15:01.613231 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-19 11:15:01.613242 | orchestrator | Friday 19 September 2025 11:15:01 +0000 (0:00:00.377) 0:04:08.461 ******
2025-09-19 11:15:01.613252 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:01.613263 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:01.613273 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:01.613320 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:01.613332 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:01.613353 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:08.335741 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:08.335880 | orchestrator |
2025-09-19 11:16:08.335895 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-19 11:16:08.335908 | orchestrator | Friday 19 September 2025 11:15:01 +0000 (0:00:00.261) 0:04:08.723 ******
2025-09-19 11:16:08.335918 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:08.335928 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:08.335938 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:08.335947 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:08.335957 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:08.335967 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:08.335976 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:08.335985 | orchestrator |
2025-09-19 11:16:08.335996 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-19 11:16:08.336006 | orchestrator | Friday 19 September 2025 11:15:07 +0000 (0:00:05.457) 0:04:14.180 ******
2025-09-19 11:16:08.336018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:16:08.336031 | orchestrator |
2025-09-19 11:16:08.336040 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-19 11:16:08.336050 | orchestrator | Friday 19 September 2025 11:15:07 +0000 (0:00:00.345) 0:04:14.526 ******
2025-09-19 11:16:08.336060 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336070 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-19 11:16:08.336080 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336090 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-19 11:16:08.336100 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:08.336110 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336119 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:08.336129 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-19 11:16:08.336138 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336147 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-19 11:16:08.336157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:08.336192 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:08.336203 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336238 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-19 11:16:08.336259 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:08.336276 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336293 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-19 11:16:08.336310 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:08.336326 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-19 11:16:08.336342 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-19 11:16:08.336358 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:08.336375 | orchestrator |
2025-09-19 11:16:08.336391 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-19 11:16:08.336408 | orchestrator | Friday 19 September 2025 11:15:07 +0000 (0:00:00.272) 0:04:14.799 ******
2025-09-19 11:16:08.336448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:16:08.336461 | orchestrator |
2025-09-19 11:16:08.336471 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-19 11:16:08.336482 | orchestrator | Friday 19 September 2025 11:15:08 +0000 (0:00:00.229) 0:04:15.167 ******
2025-09-19 11:16:08.336491 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-19 11:16:08.336501 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:08.336510 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-19 11:16:08.336520 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-19 11:16:08.336529 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:08.336538 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-19 11:16:08.336548 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:08.336557 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-19 11:16:08.336567 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:08.336576 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-19 11:16:08.336585 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:08.336595 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:08.336604 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-19 11:16:08.336614 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:08.336623 | orchestrator |
2025-09-19 11:16:08.336632 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-19 11:16:08.336642 | orchestrator | Friday 19 September 2025 11:15:08 +0000 (0:00:00.342) 0:04:15.397 ******
2025-09-19 11:16:08.336651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:16:08.336661 | orchestrator |
2025-09-19 11:16:08.336670 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-19 11:16:08.336680 | orchestrator | Friday 19 September 2025 11:15:08 +0000 (0:00:00.342) 0:04:15.739 ******
2025-09-19 11:16:08.336689 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.336718 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.336728 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.336737 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.336747 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.336756 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.336765 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.336775 | orchestrator |
2025-09-19 11:16:08.336796 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-19 11:16:08.336805 | orchestrator | Friday 19 September 2025 11:15:41 +0000 (0:00:32.808) 0:04:48.547 ******
2025-09-19 11:16:08.336815 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.336824 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.336834 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.336843 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.336852 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.336861 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.336871 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.336880 | orchestrator |
2025-09-19 11:16:08.336890 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-19 11:16:08.336899 | orchestrator | Friday 19 September 2025 11:15:49 +0000 (0:00:07.596) 0:04:56.144 ******
2025-09-19 11:16:08.336908 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.336918 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.336927 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.336936 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.336946 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.336955 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.336964 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.336973 | orchestrator |
2025-09-19 11:16:08.336983 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-19 11:16:08.336992 | orchestrator | Friday 19 September 2025 11:15:56 +0000 (0:00:07.321) 0:05:03.465 ******
2025-09-19 11:16:08.337002 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:08.337013 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:08.337030 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:08.337054 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:08.337069 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:08.337085 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:08.337099 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:08.337114 | orchestrator |
2025-09-19 11:16:08.337130 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-19 11:16:08.337145 | orchestrator | Friday 19 September 2025 11:15:57 +0000 (0:00:01.634) 0:05:05.100 ******
2025-09-19 11:16:08.337160 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.337177 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.337192 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.337208 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.337248 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.337265 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.337279 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.337289 | orchestrator |
2025-09-19 11:16:08.337298 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-19 11:16:08.337308 | orchestrator | Friday 19 September 2025 11:16:03 +0000 (0:00:05.591) 0:05:10.691 ******
2025-09-19 11:16:08.337318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:16:08.337331 | orchestrator |
2025-09-19 11:16:08.337347 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-19 11:16:08.337357 | orchestrator | Friday 19 September 2025 11:16:04 +0000 (0:00:00.497) 0:05:11.189 ******
2025-09-19 11:16:08.337366 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.337376 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.337385 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.337394 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.337404 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.337413 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.337422 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.337431 | orchestrator |
2025-09-19 11:16:08.337441 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-19 11:16:08.337461 | orchestrator | Friday 19 September 2025 11:16:04 +0000 (0:00:00.786) 0:05:11.976 ******
2025-09-19 11:16:08.337471 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:08.337480 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:08.337489 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:08.337499 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:08.337508 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:08.337517 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:08.337527 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:08.337536 | orchestrator |
2025-09-19 11:16:08.337546 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-19 11:16:08.337555 | orchestrator | Friday 19 September 2025 11:16:07 +0000 (0:00:02.303) 0:05:14.280 ******
2025-09-19 11:16:08.337565 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:08.337574 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:08.337583 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:08.337592 | orchestrator | changed: [testbed-manager]
2025-09-19 11:16:08.337602 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:08.337611 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:08.337620 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:08.337629 | orchestrator |
2025-09-19 11:16:08.337639 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-19 11:16:08.337648 | orchestrator | Friday 19 September 2025 11:16:08 +0000 (0:00:00.903) 0:05:15.184 ******
2025-09-19 11:16:08.337658 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:08.337667 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:08.337676 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:08.337686 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:08.337695 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:08.337704 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:08.337714 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:08.337723 | orchestrator |
2025-09-19 11:16:08.337732 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-19 11:16:08.337751 | orchestrator | Friday 19 September 2025 11:16:08 +0000 (0:00:00.259) 0:05:15.443 ******
2025-09-19 11:16:34.526472 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:34.526584 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:34.526599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:34.526610 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:34.526621 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:34.526633 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:34.526644 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:34.526655 | orchestrator |
2025-09-19 11:16:34.526668 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-19 11:16:34.526680 | orchestrator | Friday 19 September 2025 11:16:08 +0000 (0:00:00.413) 0:05:15.856 ******
2025-09-19 11:16:34.526691 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:34.526703 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:34.526713 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:34.526724 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:34.526734 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:34.526745 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:34.526755 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:34.526766 | orchestrator |
2025-09-19 11:16:34.526776 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-19 11:16:34.526787 | orchestrator | Friday 19 September 2025 11:16:09 +0000 (0:00:00.278) 0:05:16.135 ******
2025-09-19 11:16:34.526798 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:34.526809 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:34.526819 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:34.526830 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:34.526840 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:34.526851 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:34.526861 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:34.526895 | orchestrator |
2025-09-19 11:16:34.526906 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-19 11:16:34.526918 | orchestrator | Friday 19 September 2025 11:16:09 +0000 (0:00:00.292) 0:05:16.427 ******
2025-09-19 11:16:34.526928 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:34.526940 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:34.526957 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:34.526975 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:34.526994 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:34.527025 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:34.527044 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:34.527062 | orchestrator |
2025-09-19 11:16:34.527079 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-19 11:16:34.527096 | orchestrator | Friday 19 September 2025 11:16:09 +0000 (0:00:00.300) 0:05:16.727 ******
2025-09-19 11:16:34.527112 | orchestrator | ok: [testbed-manager] =>
2025-09-19 11:16:34.527129 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527146 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 11:16:34.527164 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527184 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 11:16:34.527230 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527248 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 11:16:34.527265 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527282 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 11:16:34.527299 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527317 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 11:16:34.527335 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527353 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 11:16:34.527372 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 11:16:34.527389 | orchestrator |
2025-09-19 11:16:34.527407 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-19 11:16:34.527424 | orchestrator | Friday 19 September 2025 11:16:09 +0000 (0:00:00.290) 0:05:17.018 ******
2025-09-19 11:16:34.527443 | orchestrator | ok: [testbed-manager] =>
2025-09-19 11:16:34.527460 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527477 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 11:16:34.527497 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527515 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 11:16:34.527534 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527545 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 11:16:34.527556 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527567 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 11:16:34.527577 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527587 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 11:16:34.527598 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527608 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 11:16:34.527619 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 11:16:34.527629 | orchestrator |
2025-09-19 11:16:34.527640 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-19 11:16:34.527650 | orchestrator | Friday 19 September 2025 11:16:10 +0000 (0:00:00.272) 0:05:17.291 ******
2025-09-19 11:16:34.527661 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:34.527671 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:34.527682 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:34.527692 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:34.527702 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:34.527713 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:34.527723 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:34.527733 | orchestrator |
2025-09-19 11:16:34.527744 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-19 11:16:34.527754 | orchestrator | Friday 19 September 2025 11:16:10 +0000 (0:00:00.265) 0:05:17.557 ******
2025-09-19 11:16:34.527765 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:16:34.527790 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:34.527800
| orchestrator | skipping: [testbed-node-1] 2025-09-19 11:16:34.527811 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:16:34.527821 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:16:34.527832 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:16:34.527842 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:16:34.527852 | orchestrator | 2025-09-19 11:16:34.527863 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-19 11:16:34.527873 | orchestrator | Friday 19 September 2025 11:16:10 +0000 (0:00:00.264) 0:05:17.821 ****** 2025-09-19 11:16:34.527905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:16:34.527920 | orchestrator | 2025-09-19 11:16:34.527931 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-19 11:16:34.527941 | orchestrator | Friday 19 September 2025 11:16:11 +0000 (0:00:00.414) 0:05:18.235 ****** 2025-09-19 11:16:34.527952 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:34.527962 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:16:34.527973 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:16:34.527983 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:16:34.527994 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:16:34.528004 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:16:34.528015 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:16:34.528025 | orchestrator | 2025-09-19 11:16:34.528036 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-19 11:16:34.528047 | orchestrator | Friday 19 September 2025 11:16:12 +0000 (0:00:00.913) 0:05:19.149 ****** 2025-09-19 11:16:34.528057 | orchestrator | ok: [testbed-node-4] 
2025-09-19 11:16:34.528068 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:34.528078 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:16:34.528088 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:16:34.528098 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:16:34.528109 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:16:34.528119 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:16:34.528129 | orchestrator | 2025-09-19 11:16:34.528140 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-19 11:16:34.528152 | orchestrator | Friday 19 September 2025 11:16:15 +0000 (0:00:03.355) 0:05:22.505 ****** 2025-09-19 11:16:34.528162 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-19 11:16:34.528173 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-19 11:16:34.528184 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-19 11:16:34.528227 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-19 11:16:34.528258 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-19 11:16:34.528269 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-19 11:16:34.528280 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:16:34.528290 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-19 11:16:34.528301 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-19 11:16:34.528311 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-19 11:16:34.528321 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:16:34.528332 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-19 11:16:34.528342 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-19 11:16:34.528353 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-19 11:16:34.528363 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 11:16:34.528374 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-19 11:16:34.528384 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:16:34.528397 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-19 11:16:34.528430 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-19 11:16:34.528456 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-19 11:16:34.528473 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-19 11:16:34.528490 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-19 11:16:34.528507 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:16:34.528524 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:16:34.528548 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-19 11:16:34.528564 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-19 11:16:34.528579 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-19 11:16:34.528595 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:16:34.528610 | orchestrator | 2025-09-19 11:16:34.528625 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-19 11:16:34.528641 | orchestrator | Friday 19 September 2025 11:16:15 +0000 (0:00:00.558) 0:05:23.064 ****** 2025-09-19 11:16:34.528656 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:34.528671 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:34.528687 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:34.528702 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:34.528717 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:34.528733 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:34.528748 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:34.528763 | orchestrator | 2025-09-19 
11:16:34.528779 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-19 11:16:34.528795 | orchestrator | Friday 19 September 2025 11:16:22 +0000 (0:00:06.364) 0:05:29.428 ****** 2025-09-19 11:16:34.528810 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:34.528825 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:34.528840 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:34.528855 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:34.528870 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:34.528885 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:34.528901 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:34.528917 | orchestrator | 2025-09-19 11:16:34.528933 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-19 11:16:34.528949 | orchestrator | Friday 19 September 2025 11:16:23 +0000 (0:00:01.194) 0:05:30.623 ****** 2025-09-19 11:16:34.528968 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:34.528986 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:34.529002 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:34.529021 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:34.529041 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:34.529059 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:34.529076 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:34.529087 | orchestrator | 2025-09-19 11:16:34.529097 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-19 11:16:34.529108 | orchestrator | Friday 19 September 2025 11:16:31 +0000 (0:00:07.754) 0:05:38.377 ****** 2025-09-19 11:16:34.529118 | orchestrator | changed: [testbed-manager] 2025-09-19 11:16:34.529129 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:34.529139 | orchestrator | changed: [testbed-node-1] 2025-09-19 
11:16:34.529161 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.574637 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.574738 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.574754 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.574766 | orchestrator | 2025-09-19 11:17:19.574779 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-19 11:17:19.574791 | orchestrator | Friday 19 September 2025 11:16:34 +0000 (0:00:03.258) 0:05:41.636 ****** 2025-09-19 11:17:19.574802 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.574814 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.574825 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.574854 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.574864 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.574875 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.574885 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.574896 | orchestrator | 2025-09-19 11:17:19.574914 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-19 11:17:19.574931 | orchestrator | Friday 19 September 2025 11:16:35 +0000 (0:00:01.333) 0:05:42.970 ****** 2025-09-19 11:17:19.574948 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.574966 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.574985 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.575003 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.575020 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.575035 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.575045 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.575056 | orchestrator | 2025-09-19 11:17:19.575067 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-19 
11:17:19.575077 | orchestrator | Friday 19 September 2025 11:16:37 +0000 (0:00:01.353) 0:05:44.323 ****** 2025-09-19 11:17:19.575088 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.575098 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.575109 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.575119 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.575129 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.575140 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.575178 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:19.575191 | orchestrator | 2025-09-19 11:17:19.575203 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-19 11:17:19.575216 | orchestrator | Friday 19 September 2025 11:16:38 +0000 (0:00:00.936) 0:05:45.260 ****** 2025-09-19 11:17:19.575227 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.575240 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.575253 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.575265 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.575277 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.575289 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.575301 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.575312 | orchestrator | 2025-09-19 11:17:19.575324 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-19 11:17:19.575337 | orchestrator | Friday 19 September 2025 11:16:47 +0000 (0:00:09.535) 0:05:54.796 ****** 2025-09-19 11:17:19.575349 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:19.575360 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.575373 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.575384 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.575396 | orchestrator | changed: 
[testbed-node-3] 2025-09-19 11:17:19.575408 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.575420 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.575432 | orchestrator | 2025-09-19 11:17:19.575444 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-19 11:17:19.575464 | orchestrator | Friday 19 September 2025 11:16:48 +0000 (0:00:00.937) 0:05:55.733 ****** 2025-09-19 11:17:19.575476 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.575489 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.575501 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.575513 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.575524 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.575537 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.575548 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.575559 | orchestrator | 2025-09-19 11:17:19.575569 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-19 11:17:19.575580 | orchestrator | Friday 19 September 2025 11:16:58 +0000 (0:00:09.492) 0:06:05.226 ****** 2025-09-19 11:17:19.575599 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.575610 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.575620 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.575631 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.575641 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.575652 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.575662 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.575672 | orchestrator | 2025-09-19 11:17:19.575683 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-19 11:17:19.575694 | orchestrator | Friday 19 September 2025 11:17:09 +0000 (0:00:11.128) 0:06:16.354 ****** 2025-09-19 
11:17:19.575704 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-19 11:17:19.575715 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-19 11:17:19.575725 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-19 11:17:19.575736 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-19 11:17:19.575746 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-19 11:17:19.575757 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-19 11:17:19.575767 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-19 11:17:19.575778 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-19 11:17:19.575788 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-19 11:17:19.575799 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-19 11:17:19.575809 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-19 11:17:19.575820 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-19 11:17:19.575830 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-19 11:17:19.575841 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-19 11:17:19.575851 | orchestrator | 2025-09-19 11:17:19.575862 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-19 11:17:19.575889 | orchestrator | Friday 19 September 2025 11:17:10 +0000 (0:00:01.221) 0:06:17.575 ****** 2025-09-19 11:17:19.575901 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.575911 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.575922 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.575932 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.575942 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.575953 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.575963 | orchestrator 
| skipping: [testbed-node-5] 2025-09-19 11:17:19.575974 | orchestrator | 2025-09-19 11:17:19.575984 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-19 11:17:19.575995 | orchestrator | Friday 19 September 2025 11:17:11 +0000 (0:00:00.557) 0:06:18.133 ****** 2025-09-19 11:17:19.576006 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.576016 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:19.576026 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:19.576037 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:19.576047 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:19.576058 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:19.576068 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:19.576079 | orchestrator | 2025-09-19 11:17:19.576089 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-19 11:17:19.576101 | orchestrator | Friday 19 September 2025 11:17:15 +0000 (0:00:04.216) 0:06:22.350 ****** 2025-09-19 11:17:19.576111 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.576122 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.576132 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.576143 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.576193 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.576204 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.576215 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.576233 | orchestrator | 2025-09-19 11:17:19.576244 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-19 11:17:19.576256 | orchestrator | Friday 19 September 2025 11:17:15 +0000 (0:00:00.486) 0:06:22.836 ****** 2025-09-19 11:17:19.576266 | orchestrator | skipping: [testbed-manager] => 
(item=python3-docker)  2025-09-19 11:17:19.576277 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-19 11:17:19.576287 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.576298 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-19 11:17:19.576308 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-19 11:17:19.576319 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.576329 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-19 11:17:19.576340 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-19 11:17:19.576350 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.576361 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-19 11:17:19.576371 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-19 11:17:19.576382 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.576392 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-19 11:17:19.576403 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-19 11:17:19.576413 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.576423 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-19 11:17:19.576439 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-19 11:17:19.576449 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.576460 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-19 11:17:19.576470 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-19 11:17:19.576481 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.576491 | orchestrator | 2025-09-19 11:17:19.576502 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-19 11:17:19.576512 | 
orchestrator | Friday 19 September 2025 11:17:16 +0000 (0:00:00.711) 0:06:23.548 ****** 2025-09-19 11:17:19.576523 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.576533 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.576543 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.576554 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.576564 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.576575 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.576585 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.576595 | orchestrator | 2025-09-19 11:17:19.576606 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-19 11:17:19.576617 | orchestrator | Friday 19 September 2025 11:17:16 +0000 (0:00:00.524) 0:06:24.072 ****** 2025-09-19 11:17:19.576627 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.576638 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.576648 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.576659 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.576669 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:19.576679 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.576690 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.576700 | orchestrator | 2025-09-19 11:17:19.576711 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-19 11:17:19.576721 | orchestrator | Friday 19 September 2025 11:17:17 +0000 (0:00:00.506) 0:06:24.579 ****** 2025-09-19 11:17:19.576732 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:19.576743 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:19.576753 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:19.576763 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:19.576774 | orchestrator | 
skipping: [testbed-node-3] 2025-09-19 11:17:19.576791 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:19.576802 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:19.576812 | orchestrator | 2025-09-19 11:17:19.576823 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-19 11:17:19.576834 | orchestrator | Friday 19 September 2025 11:17:17 +0000 (0:00:00.518) 0:06:25.097 ****** 2025-09-19 11:17:19.576844 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:19.576862 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:42.004551 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:42.004659 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:42.004671 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:42.004680 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:42.004689 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:42.004698 | orchestrator | 2025-09-19 11:17:42.004708 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-19 11:17:42.004718 | orchestrator | Friday 19 September 2025 11:17:19 +0000 (0:00:01.588) 0:06:26.686 ****** 2025-09-19 11:17:42.004728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:17:42.004740 | orchestrator | 2025-09-19 11:17:42.004749 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-19 11:17:42.004758 | orchestrator | Friday 19 September 2025 11:17:20 +0000 (0:00:01.010) 0:06:27.696 ****** 2025-09-19 11:17:42.004766 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:42.004775 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:42.004785 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:42.004793 | orchestrator | 
changed: [testbed-node-2] 2025-09-19 11:17:42.004802 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:42.004810 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:42.004819 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:42.004827 | orchestrator | 2025-09-19 11:17:42.004836 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-19 11:17:42.004845 | orchestrator | Friday 19 September 2025 11:17:21 +0000 (0:00:00.808) 0:06:28.504 ****** 2025-09-19 11:17:42.004853 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:42.004862 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:42.004870 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:42.004879 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:42.004888 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:42.004897 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:42.004905 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:42.004914 | orchestrator | 2025-09-19 11:17:42.004923 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-19 11:17:42.004931 | orchestrator | Friday 19 September 2025 11:17:22 +0000 (0:00:00.834) 0:06:29.339 ****** 2025-09-19 11:17:42.004940 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:42.004949 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:42.004957 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:42.004966 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:42.004975 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:42.004983 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:42.004992 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:42.005000 | orchestrator | 2025-09-19 11:17:42.005009 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-19 11:17:42.005019 | 
orchestrator | Friday 19 September 2025 11:17:23 +0000 (0:00:01.425) 0:06:30.764 ****** 2025-09-19 11:17:42.005027 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:17:42.005036 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:42.005045 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:42.005053 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:42.005062 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:42.005071 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:42.005099 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:42.005108 | orchestrator | 2025-09-19 11:17:42.005117 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-19 11:17:42.005165 | orchestrator | Friday 19 September 2025 11:17:25 +0000 (0:00:01.589) 0:06:32.354 ****** 2025-09-19 11:17:42.005176 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:42.005184 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:42.005193 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:42.005201 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:42.005210 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:42.005218 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:42.005227 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:42.005235 | orchestrator | 2025-09-19 11:17:42.005244 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-19 11:17:42.005253 | orchestrator | Friday 19 September 2025 11:17:26 +0000 (0:00:01.384) 0:06:33.738 ****** 2025-09-19 11:17:42.005261 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:42.005270 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:42.005278 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:42.005287 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:42.005295 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:42.005304 | 
orchestrator | changed: [testbed-node-4]
2025-09-19 11:17:42.005312 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:17:42.005321 | orchestrator |
2025-09-19 11:17:42.005329 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-19 11:17:42.005338 | orchestrator | Friday 19 September 2025  11:17:28 +0000 (0:00:01.437)       0:06:35.175 ******
2025-09-19 11:17:42.005347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:17:42.005355 | orchestrator |
2025-09-19 11:17:42.005364 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-19 11:17:42.005373 | orchestrator | Friday 19 September 2025  11:17:29 +0000 (0:00:01.086)       0:06:36.262 ******
2025-09-19 11:17:42.005381 | orchestrator | ok: [testbed-manager]
2025-09-19 11:17:42.005390 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:17:42.005399 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:17:42.005407 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:17:42.005416 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:17:42.005425 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:17:42.005433 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:17:42.005441 | orchestrator |
2025-09-19 11:17:42.005450 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-19 11:17:42.005459 | orchestrator | Friday 19 September 2025  11:17:30 +0000 (0:00:01.381)       0:06:37.643 ******
2025-09-19 11:17:42.005469 | orchestrator | ok: [testbed-manager]
2025-09-19 11:17:42.005484 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:17:42.005518 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:17:42.005533 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:17:42.005547 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:17:42.005560 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:17:42.005575 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:17:42.005589 | orchestrator |
2025-09-19 11:17:42.005604 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-19 11:17:42.005618 | orchestrator | Friday 19 September 2025  11:17:31 +0000 (0:00:01.221)       0:06:38.865 ******
2025-09-19 11:17:42.005633 | orchestrator | ok: [testbed-manager]
2025-09-19 11:17:42.005646 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:17:42.005661 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:17:42.005677 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:17:42.005691 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:17:42.005704 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:17:42.005713 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:17:42.005721 | orchestrator |
2025-09-19 11:17:42.005730 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-19 11:17:42.005748 | orchestrator | Friday 19 September 2025  11:17:32 +0000 (0:00:01.212)       0:06:40.078 ******
2025-09-19 11:17:42.005757 | orchestrator | ok: [testbed-manager]
2025-09-19 11:17:42.005765 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:17:42.005773 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:17:42.005782 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:17:42.005790 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:17:42.005798 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:17:42.005807 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:17:42.005815 | orchestrator |
2025-09-19 11:17:42.005824 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-19 11:17:42.005832 | orchestrator | Friday 19 September 2025  11:17:34 +0000 (0:00:01.162)       0:06:41.240 ******
2025-09-19 11:17:42.005841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:17:42.005850 | orchestrator |
2025-09-19 11:17:42.005858 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005867 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:01.147)       0:06:42.387 ******
2025-09-19 11:17:42.005875 | orchestrator |
2025-09-19 11:17:42.005883 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005892 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.041)       0:06:42.429 ******
2025-09-19 11:17:42.005900 | orchestrator |
2025-09-19 11:17:42.005909 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005917 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.040)       0:06:42.470 ******
2025-09-19 11:17:42.005925 | orchestrator |
2025-09-19 11:17:42.005934 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005942 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.048)       0:06:42.519 ******
2025-09-19 11:17:42.005951 | orchestrator |
2025-09-19 11:17:42.005959 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005968 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.057)       0:06:42.576 ******
2025-09-19 11:17:42.005976 | orchestrator |
2025-09-19 11:17:42.005984 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.005993 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.040)       0:06:42.616 ******
2025-09-19 11:17:42.006001 | orchestrator |
2025-09-19 11:17:42.006010 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:17:42.006076 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.052)       0:06:42.669 ******
2025-09-19 11:17:42.006085 | orchestrator |
2025-09-19 11:17:42.006094 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 11:17:42.006102 | orchestrator | Friday 19 September 2025  11:17:35 +0000 (0:00:00.039)       0:06:42.708 ******
2025-09-19 11:17:42.006111 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:17:42.006120 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:17:42.006166 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:17:42.006176 | orchestrator |
2025-09-19 11:17:42.006184 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-19 11:17:42.006193 | orchestrator | Friday 19 September 2025  11:17:36 +0000 (0:00:01.192)       0:06:43.901 ******
2025-09-19 11:17:42.006210 | orchestrator | changed: [testbed-manager]
2025-09-19 11:17:42.006218 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:17:42.006227 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:17:42.006235 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:17:42.006244 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:17:42.006252 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:17:42.006261 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:17:42.006269 | orchestrator |
2025-09-19 11:17:42.006278 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-19 11:17:42.006294 | orchestrator | Friday 19 September 2025  11:17:38 +0000 (0:00:01.341)       0:06:45.242 ******
2025-09-19 11:17:42.006303 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:17:42.006311 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:17:42.006320 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:17:42.006328 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:17:42.006336 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:17:42.006345 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:17:42.006353 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:17:42.006362 | orchestrator |
2025-09-19 11:17:42.006370 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-19 11:17:42.006379 | orchestrator | Friday 19 September 2025  11:17:40 +0000 (0:00:02.753)       0:06:47.996 ******
2025-09-19 11:17:42.006387 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:17:42.006396 | orchestrator |
2025-09-19 11:17:42.006413 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-19 11:17:42.006422 | orchestrator | Friday 19 September 2025  11:17:40 +0000 (0:00:00.106)       0:06:48.102 ******
2025-09-19 11:17:42.006431 | orchestrator | ok: [testbed-manager]
2025-09-19 11:17:42.006439 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:17:42.006447 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:17:42.006456 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:17:42.006473 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:07.977060 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:07.977198 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:07.977214 | orchestrator |
2025-09-19 11:18:07.977227 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-19 11:18:07.977240 | orchestrator | Friday 19 September 2025  11:17:41 +0000 (0:00:01.007)       0:06:49.109 ******
2025-09-19 11:18:07.977252 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.977263 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.977274 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.977284 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.977295 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.977305 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.977316 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.977327 | orchestrator |
2025-09-19 11:18:07.977338 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-19 11:18:07.977349 | orchestrator | Friday 19 September 2025  11:17:42 +0000 (0:00:00.554)       0:06:49.664 ******
2025-09-19 11:18:07.977360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:18:07.977374 | orchestrator |
2025-09-19 11:18:07.977385 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-19 11:18:07.977396 | orchestrator | Friday 19 September 2025  11:17:43 +0000 (0:00:01.056)       0:06:50.720 ******
2025-09-19 11:18:07.977407 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.977419 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:07.977429 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:07.977440 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:07.977451 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:07.977461 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:07.977472 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:07.977483 | orchestrator |
2025-09-19 11:18:07.977493 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-19 11:18:07.977504 | orchestrator | Friday 19 September 2025  11:17:44 +0000 (0:00:00.850)       0:06:51.570 ******
2025-09-19 11:18:07.977515 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-19 11:18:07.977526 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-19 11:18:07.977537 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-19 11:18:07.977573 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-19 11:18:07.977585 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-19 11:18:07.977595 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-19 11:18:07.977606 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-19 11:18:07.977616 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-19 11:18:07.977627 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-19 11:18:07.977638 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-19 11:18:07.977648 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-19 11:18:07.977659 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-19 11:18:07.977670 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-19 11:18:07.977693 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-19 11:18:07.977704 | orchestrator |
2025-09-19 11:18:07.977715 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-19 11:18:07.977726 | orchestrator | Friday 19 September 2025  11:17:47 +0000 (0:00:02.564)       0:06:54.135 ******
2025-09-19 11:18:07.977736 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.977747 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.977758 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.977768 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.977779 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.977789 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.977800 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.977810 | orchestrator |
2025-09-19 11:18:07.977821 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-19 11:18:07.977831 | orchestrator | Friday 19 September 2025  11:17:47 +0000 (0:00:00.526)       0:06:54.662 ******
2025-09-19 11:18:07.977844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:18:07.977857 | orchestrator |
2025-09-19 11:18:07.977867 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-19 11:18:07.977878 | orchestrator | Friday 19 September 2025  11:17:48 +0000 (0:00:00.981)       0:06:55.643 ******
2025-09-19 11:18:07.977889 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.977899 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:07.977910 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:07.977920 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:07.977931 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:07.977941 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:07.977952 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:07.977962 | orchestrator |
2025-09-19 11:18:07.977973 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-19 11:18:07.977984 | orchestrator | Friday 19 September 2025  11:17:49 +0000 (0:00:00.862)       0:06:56.506 ******
2025-09-19 11:18:07.977995 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978005 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:07.978070 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:07.978084 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:07.978094 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:07.978121 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:07.978133 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:07.978143 | orchestrator |
2025-09-19 11:18:07.978155 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-19 11:18:07.978182 | orchestrator | Friday 19 September 2025  11:17:50 +0000 (0:00:00.809)       0:06:57.315 ******
2025-09-19 11:18:07.978194 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.978204 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.978216 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.978254 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.978273 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.978291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.978308 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.978325 | orchestrator |
2025-09-19 11:18:07.978342 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-19 11:18:07.978361 | orchestrator | Friday 19 September 2025  11:17:50 +0000 (0:00:00.542)       0:06:57.858 ******
2025-09-19 11:18:07.978378 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978397 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:07.978416 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:07.978434 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:07.978451 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:07.978470 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:07.978483 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:07.978494 | orchestrator |
2025-09-19 11:18:07.978505 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-19 11:18:07.978515 | orchestrator | Friday 19 September 2025  11:17:52 +0000 (0:00:01.833)       0:06:59.691 ******
2025-09-19 11:18:07.978526 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.978536 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.978547 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.978558 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.978568 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.978578 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.978589 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.978599 | orchestrator |
2025-09-19 11:18:07.978610 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-19 11:18:07.978621 | orchestrator | Friday 19 September 2025  11:17:53 +0000 (0:00:00.545)       0:07:00.237 ******
2025-09-19 11:18:07.978631 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978642 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:07.978652 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:07.978662 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:07.978673 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:07.978683 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:07.978693 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:07.978704 | orchestrator |
2025-09-19 11:18:07.978714 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-19 11:18:07.978725 | orchestrator | Friday 19 September 2025  11:18:00 +0000 (0:00:07.440)       0:07:07.677 ******
2025-09-19 11:18:07.978735 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978746 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:07.978756 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:07.978766 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:07.978777 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:07.978787 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:07.978798 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:07.978808 | orchestrator |
2025-09-19 11:18:07.978819 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-19 11:18:07.978829 | orchestrator | Friday 19 September 2025  11:18:01 +0000 (0:00:01.310)       0:07:08.987 ******
2025-09-19 11:18:07.978840 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978850 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:07.978861 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:07.978871 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:07.978881 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:07.978900 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:07.978911 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:07.978921 | orchestrator |
2025-09-19 11:18:07.978932 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-19 11:18:07.978942 | orchestrator | Friday 19 September 2025  11:18:03 +0000 (0:00:01.722)       0:07:10.710 ******
2025-09-19 11:18:07.978953 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.978973 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:07.978983 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:07.978993 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:07.979004 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:07.979014 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:07.979025 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:07.979035 | orchestrator |
2025-09-19 11:18:07.979046 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 11:18:07.979056 | orchestrator | Friday 19 September 2025  11:18:05 +0000 (0:00:01.971)       0:07:12.681 ******
2025-09-19 11:18:07.979067 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:07.979077 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:07.979088 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:07.979098 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:07.979148 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:07.979159 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:07.979170 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:07.979180 | orchestrator |
2025-09-19 11:18:07.979191 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 11:18:07.979202 | orchestrator | Friday 19 September 2025  11:18:06 +0000 (0:00:00.911)       0:07:13.593 ******
2025-09-19 11:18:07.979213 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.979223 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.979234 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.979244 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.979255 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.979265 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.979276 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.979286 | orchestrator |
2025-09-19 11:18:07.979297 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-19 11:18:07.979307 | orchestrator | Friday 19 September 2025  11:18:07 +0000 (0:00:00.958)       0:07:14.551 ******
2025-09-19 11:18:07.979318 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:07.979328 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:07.979338 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:07.979349 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:07.979359 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:07.979370 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:07.979380 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:07.979391 | orchestrator |
2025-09-19 11:18:07.979411 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-19 11:18:40.568557 | orchestrator | Friday 19 September 2025  11:18:07 +0000 (0:00:00.533)       0:07:15.085 ******
2025-09-19 11:18:40.568666 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.568682 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.568693 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.568704 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.568714 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.568725 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.568737 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.568748 | orchestrator |
2025-09-19 11:18:40.568760 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-19 11:18:40.568771 | orchestrator | Friday 19 September 2025  11:18:08 +0000 (0:00:00.586)       0:07:15.671 ******
2025-09-19 11:18:40.568782 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.568792 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.568803 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.568813 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.568824 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.568834 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.568845 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.568855 | orchestrator |
2025-09-19 11:18:40.568866 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-19 11:18:40.568877 | orchestrator | Friday 19 September 2025  11:18:09 +0000 (0:00:00.548)       0:07:16.220 ******
2025-09-19 11:18:40.568915 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.568926 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.568937 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.568947 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.568957 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.568968 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.568978 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.568989 | orchestrator |
2025-09-19 11:18:40.569000 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-19 11:18:40.569010 | orchestrator | Friday 19 September 2025  11:18:09 +0000 (0:00:00.535)       0:07:16.755 ******
2025-09-19 11:18:40.569021 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.569031 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.569041 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.569052 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.569062 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.569107 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.569122 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.569134 | orchestrator |
2025-09-19 11:18:40.569146 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-19 11:18:40.569158 | orchestrator | Friday 19 September 2025  11:18:15 +0000 (0:00:05.771)       0:07:22.526 ******
2025-09-19 11:18:40.569169 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:40.569182 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:40.569194 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:40.569206 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:40.569218 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:40.569230 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:40.569242 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:40.569255 | orchestrator |
2025-09-19 11:18:40.569267 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-19 11:18:40.569279 | orchestrator | Friday 19 September 2025  11:18:15 +0000 (0:00:00.488)       0:07:23.015 ******
2025-09-19 11:18:40.569305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:18:40.569321 | orchestrator |
2025-09-19 11:18:40.569333 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-19 11:18:40.569346 | orchestrator | Friday 19 September 2025  11:18:16 +0000 (0:00:00.707)       0:07:23.723 ******
2025-09-19 11:18:40.569359 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.569378 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.569395 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.569411 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.569428 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.569444 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.569460 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.569478 | orchestrator |
2025-09-19 11:18:40.569495 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-19 11:18:40.569511 | orchestrator | Friday 19 September 2025  11:18:18 +0000 (0:00:01.848)       0:07:25.571 ******
2025-09-19 11:18:40.569529 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.569547 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.569563 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.569582 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.569601 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.569619 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.569634 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.569645 | orchestrator |
2025-09-19 11:18:40.569656 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-19 11:18:40.569667 | orchestrator | Friday 19 September 2025  11:18:19 +0000 (0:00:01.032)       0:07:26.604 ******
2025-09-19 11:18:40.569678 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.569688 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.569711 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.569721 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.569732 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.569742 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.569753 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.569763 | orchestrator |
2025-09-19 11:18:40.569774 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-19 11:18:40.569784 | orchestrator | Friday 19 September 2025  11:18:20 +0000 (0:00:00.838)       0:07:27.442 ******
2025-09-19 11:18:40.569795 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569808 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569819 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569848 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569860 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569870 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569881 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-19 11:18:40.569892 | orchestrator |
2025-09-19 11:18:40.569903 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-19 11:18:40.569913 | orchestrator | Friday 19 September 2025  11:18:21 +0000 (0:00:01.673)       0:07:29.116 ******
2025-09-19 11:18:40.569924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:18:40.569936 | orchestrator |
2025-09-19 11:18:40.569946 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-19 11:18:40.569957 | orchestrator | Friday 19 September 2025  11:18:22 +0000 (0:00:00.971)       0:07:30.087 ******
2025-09-19 11:18:40.569967 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:40.569978 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:40.569989 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:40.569999 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:40.570010 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:40.570133 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:40.570153 | orchestrator | changed: [testbed-manager]
2025-09-19 11:18:40.570170 | orchestrator |
2025-09-19 11:18:40.570188 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-19 11:18:40.570206 | orchestrator | Friday 19 September 2025  11:18:32 +0000 (0:00:09.320)       0:07:39.408 ******
2025-09-19 11:18:40.570223 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.570241 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.570261 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.570279 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.570297 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.570316 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.570334 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.570347 | orchestrator |
2025-09-19 11:18:40.570359 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-19 11:18:40.570370 | orchestrator | Friday 19 September 2025  11:18:34 +0000 (0:00:01.903)       0:07:41.311 ******
2025-09-19 11:18:40.570380 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.570391 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.570412 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.570422 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.570433 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.570443 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.570453 | orchestrator |
2025-09-19 11:18:40.570464 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-19 11:18:40.570499 | orchestrator | Friday 19 September 2025  11:18:35 +0000 (0:00:01.333)       0:07:42.645 ******
2025-09-19 11:18:40.570510 | orchestrator | changed: [testbed-manager]
2025-09-19 11:18:40.570521 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:40.570532 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:40.570553 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:40.570564 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:40.570574 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:40.570590 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:40.570610 | orchestrator |
2025-09-19 11:18:40.570629 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-19 11:18:40.570649 | orchestrator |
2025-09-19 11:18:40.570669 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-19 11:18:40.570690 | orchestrator | Friday 19 September 2025  11:18:36 +0000 (0:00:01.315)       0:07:43.961 ******
2025-09-19 11:18:40.570703 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:18:40.570714 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:18:40.570724 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:18:40.570735 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:18:40.570745 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:18:40.570756 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:18:40.570766 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:18:40.570776 | orchestrator |
2025-09-19 11:18:40.570787 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-19 11:18:40.570797 | orchestrator |
2025-09-19 11:18:40.570808 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-19 11:18:40.570818 | orchestrator | Friday 19 September 2025  11:18:37 +0000 (0:00:00.555)       0:07:44.516 ******
2025-09-19 11:18:40.570829 | orchestrator | changed: [testbed-manager]
2025-09-19 11:18:40.570839 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:18:40.570850 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:18:40.570860 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:18:40.570870 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:18:40.570881 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:18:40.570891 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:18:40.570902 | orchestrator |
2025-09-19 11:18:40.570912 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-19 11:18:40.570923 | orchestrator | Friday 19 September 2025  11:18:38 +0000 (0:00:01.500)       0:07:46.016 ******
2025-09-19 11:18:40.570933 | orchestrator | ok: [testbed-manager]
2025-09-19 11:18:40.570943 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:18:40.570954 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:18:40.570964 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:18:40.570975 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:18:40.570985 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:18:40.570995 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:18:40.571005 | orchestrator |
2025-09-19 11:18:40.571016 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-19 11:18:40.571037 | orchestrator | Friday 19 September 2025  11:18:40 +0000 (0:00:01.654)       0:07:47.671 ******
2025-09-19 11:19:03.934607 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:19:03.934696 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:19:03.934705 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:19:03.934713 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:19:03.934721 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:19:03.934728 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:19:03.934734 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:19:03.934760 | orchestrator |
2025-09-19 11:19:03.934769 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-19 11:19:03.934776 | orchestrator | Friday 19 September 2025  11:18:41 +0000 (0:00:00.509)       0:07:48.180 ******
2025-09-19 11:19:03.934784 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:19:03.934792 | orchestrator |
2025-09-19 11:19:03.934799 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-19 11:19:03.934806 | orchestrator | Friday 19 September 2025  11:18:42 +0000 (0:00:00.997)       0:07:49.178 ******
2025-09-19 11:19:03.934814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:19:03.934823 | orchestrator |
2025-09-19 11:19:03.934830 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-19 11:19:03.934836 | orchestrator | Friday 19 September 2025 11:18:42 +0000 (0:00:00.809) 0:07:49.987 ****** 2025-09-19 11:19:03.934843 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.934849 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.934855 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.934862 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.934868 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.934875 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.934881 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.934887 | orchestrator | 2025-09-19 11:19:03.934894 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-19 11:19:03.934900 | orchestrator | Friday 19 September 2025 11:18:51 +0000 (0:00:08.278) 0:07:58.266 ****** 2025-09-19 11:19:03.934907 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.934913 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.934920 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.934926 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.934932 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.934939 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.934945 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.934952 | orchestrator | 2025-09-19 11:19:03.934958 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-19 11:19:03.934965 | orchestrator | Friday 19 September 2025 11:18:51 +0000 (0:00:00.819) 0:07:59.086 ****** 2025-09-19 11:19:03.934971 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.934978 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.934984 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.934990 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.934997 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935004 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935010 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935016 | orchestrator | 2025-09-19 11:19:03.935023 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-19 11:19:03.935030 | orchestrator | Friday 19 September 2025 11:18:53 +0000 (0:00:01.414) 0:08:00.500 ****** 2025-09-19 11:19:03.935036 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.935043 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.935123 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.935131 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.935138 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935144 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935151 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935158 | orchestrator | 2025-09-19 11:19:03.935165 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-19 11:19:03.935173 | orchestrator | Friday 19 September 2025 11:18:55 +0000 (0:00:01.747) 0:08:02.247 ****** 2025-09-19 11:19:03.935180 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.935195 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.935202 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.935209 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.935217 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935224 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935231 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935239 | orchestrator | 2025-09-19 11:19:03.935246 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-19 
11:19:03.935254 | orchestrator | Friday 19 September 2025 11:18:56 +0000 (0:00:01.229) 0:08:03.476 ****** 2025-09-19 11:19:03.935262 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.935269 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.935276 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.935283 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.935291 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935298 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935305 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935313 | orchestrator | 2025-09-19 11:19:03.935320 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-19 11:19:03.935328 | orchestrator | 2025-09-19 11:19:03.935335 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-19 11:19:03.935343 | orchestrator | Friday 19 September 2025 11:18:57 +0000 (0:00:01.343) 0:08:04.819 ****** 2025-09-19 11:19:03.935350 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:19:03.935358 | orchestrator | 2025-09-19 11:19:03.935365 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 11:19:03.935385 | orchestrator | Friday 19 September 2025 11:18:58 +0000 (0:00:00.855) 0:08:05.675 ****** 2025-09-19 11:19:03.935393 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:03.935400 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:03.935407 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:03.935413 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:03.935420 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:03.935426 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:03.935433 | orchestrator | ok: [testbed-node-5] 2025-09-19 
11:19:03.935439 | orchestrator | 2025-09-19 11:19:03.935446 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 11:19:03.935453 | orchestrator | Friday 19 September 2025 11:18:59 +0000 (0:00:00.830) 0:08:06.505 ****** 2025-09-19 11:19:03.935459 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.935502 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.935510 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.935516 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.935523 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935529 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935536 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935542 | orchestrator | 2025-09-19 11:19:03.935548 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-19 11:19:03.935555 | orchestrator | Friday 19 September 2025 11:19:00 +0000 (0:00:01.440) 0:08:07.945 ****** 2025-09-19 11:19:03.935562 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:19:03.935568 | orchestrator | 2025-09-19 11:19:03.935575 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 11:19:03.935581 | orchestrator | Friday 19 September 2025 11:19:01 +0000 (0:00:00.865) 0:08:08.811 ****** 2025-09-19 11:19:03.935588 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:03.935594 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:03.935601 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:03.935607 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:03.935614 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:03.935628 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:03.935634 | orchestrator | ok: [testbed-node-5] 2025-09-19 
11:19:03.935641 | orchestrator | 2025-09-19 11:19:03.935647 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 11:19:03.935654 | orchestrator | Friday 19 September 2025 11:19:02 +0000 (0:00:00.887) 0:08:09.699 ****** 2025-09-19 11:19:03.935660 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:03.935667 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:03.935673 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:03.935680 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:03.935686 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:03.935693 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:03.935699 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:03.935706 | orchestrator | 2025-09-19 11:19:03.935712 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:19:03.935720 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-19 11:19:03.935727 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-19 11:19:03.935737 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 11:19:03.935744 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 11:19:03.935751 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 11:19:03.935757 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 11:19:03.935764 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 11:19:03.935771 | orchestrator | 2025-09-19 11:19:03.935777 | orchestrator | 2025-09-19 
11:19:03.935784 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:19:03.935794 | orchestrator | Friday 19 September 2025 11:19:03 +0000 (0:00:01.329) 0:08:11.028 ****** 2025-09-19 11:19:03.935805 | orchestrator | =============================================================================== 2025-09-19 11:19:03.935815 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.40s 2025-09-19 11:19:03.935824 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.52s 2025-09-19 11:19:03.935835 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.81s 2025-09-19 11:19:03.935846 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.75s 2025-09-19 11:19:03.935856 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 18.09s 2025-09-19 11:19:03.935867 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.37s 2025-09-19 11:19:03.935879 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.13s 2025-09-19 11:19:03.935889 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.54s 2025-09-19 11:19:03.935900 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.49s 2025-09-19 11:19:03.935911 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.32s 2025-09-19 11:19:03.935930 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.28s 2025-09-19 11:19:04.410748 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.75s 2025-09-19 11:19:04.410844 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.60s 2025-09-19 11:19:04.410886 | 
orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.44s 2025-09-19 11:19:04.410898 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.43s 2025-09-19 11:19:04.410909 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.32s 2025-09-19 11:19:04.410920 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.36s 2025-09-19 11:19:04.410931 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.16s 2025-09-19 11:19:04.410942 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.77s 2025-09-19 11:19:04.410953 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.63s 2025-09-19 11:19:04.747896 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 11:19:04.747989 | orchestrator | + osism apply network 2025-09-19 11:19:17.418381 | orchestrator | 2025-09-19 11:19:17 | INFO  | Task ba95f1ac-48b2-4b5d-90c0-044b4e6f1aaf (network) was prepared for execution. 2025-09-19 11:19:17.418486 | orchestrator | 2025-09-19 11:19:17 | INFO  | It takes a moment until task ba95f1ac-48b2-4b5d-90c0-044b4e6f1aaf (network) has been started and output is visible here. 
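The `PLAY RECAP` block above reports per-host counters in a fixed `key=value` layout. As a side note, a minimal sketch of how such a recap line can be parsed when post-processing job logs like this one (the `parse_recap` helper and the sample line are illustrative, not part of the job):

```python
import re

# Hedged sketch: turn one Ansible PLAY RECAP line into a dict of counters.
# The sample line mirrors the testbed-manager entry in the recap above.
RECAP_RE = re.compile(r"(\w+)=(\d+)")

def parse_recap(line: str) -> dict:
    """Return {'ok': 163, 'changed': 38, ...} from a PLAY RECAP line."""
    # The hostname contains no '=', so the regex only matches the counters.
    return {key: int(val) for key, val in RECAP_RE.findall(line)}

stats = parse_recap(
    "testbed-manager : ok=163 changed=38 unreachable=0 "
    "failed=0 skipped=41 rescued=0 ignored=0"
)
```

A post-processing step could, for example, flag the build whenever `stats["failed"]` or `stats["unreachable"]` is non-zero, which is what the successful recap above rules out for all seven hosts.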
2025-09-19 11:19:45.693911 | orchestrator | 2025-09-19 11:19:45.693984 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-19 11:19:45.693991 | orchestrator | 2025-09-19 11:19:45.693995 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-19 11:19:45.694000 | orchestrator | Friday 19 September 2025 11:19:21 +0000 (0:00:00.275) 0:00:00.275 ****** 2025-09-19 11:19:45.694004 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694043 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694048 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694052 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694057 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694061 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694064 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694069 | orchestrator | 2025-09-19 11:19:45.694073 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-19 11:19:45.694077 | orchestrator | Friday 19 September 2025 11:19:22 +0000 (0:00:00.712) 0:00:00.987 ****** 2025-09-19 11:19:45.694083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:19:45.694088 | orchestrator | 2025-09-19 11:19:45.694092 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-19 11:19:45.694096 | orchestrator | Friday 19 September 2025 11:19:23 +0000 (0:00:01.191) 0:00:02.178 ****** 2025-09-19 11:19:45.694100 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694104 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694107 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694111 | 
orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694115 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694119 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694122 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694126 | orchestrator | 2025-09-19 11:19:45.694145 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-19 11:19:45.694149 | orchestrator | Friday 19 September 2025 11:19:25 +0000 (0:00:02.013) 0:00:04.192 ****** 2025-09-19 11:19:45.694153 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694157 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694160 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694164 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694168 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694171 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694175 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694179 | orchestrator | 2025-09-19 11:19:45.694182 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-19 11:19:45.694201 | orchestrator | Friday 19 September 2025 11:19:27 +0000 (0:00:01.935) 0:00:06.128 ****** 2025-09-19 11:19:45.694205 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-19 11:19:45.694209 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-19 11:19:45.694213 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-19 11:19:45.694216 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-19 11:19:45.694220 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-19 11:19:45.694224 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-19 11:19:45.694227 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-19 11:19:45.694231 | orchestrator | 2025-09-19 11:19:45.694235 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-09-19 11:19:45.694239 | orchestrator | Friday 19 September 2025 11:19:28 +0000 (0:00:00.984) 0:00:07.112 ****** 2025-09-19 11:19:45.694243 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:19:45.694247 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:19:45.694251 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 11:19:45.694255 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:19:45.694258 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 11:19:45.694262 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:19:45.694266 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:19:45.694269 | orchestrator | 2025-09-19 11:19:45.694273 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-19 11:19:45.694277 | orchestrator | Friday 19 September 2025 11:19:31 +0000 (0:00:03.309) 0:00:10.422 ****** 2025-09-19 11:19:45.694280 | orchestrator | changed: [testbed-manager] 2025-09-19 11:19:45.694285 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:45.694288 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:19:45.694292 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:45.694296 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:45.694299 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:45.694303 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:45.694307 | orchestrator | 2025-09-19 11:19:45.694310 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-19 11:19:45.694314 | orchestrator | Friday 19 September 2025 11:19:33 +0000 (0:00:01.489) 0:00:11.911 ****** 2025-09-19 11:19:45.694318 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:19:45.694321 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:19:45.694325 | orchestrator | ok: [testbed-node-1 
-> localhost] 2025-09-19 11:19:45.694329 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:19:45.694332 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 11:19:45.694336 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:19:45.694340 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:19:45.694343 | orchestrator | 2025-09-19 11:19:45.694347 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-19 11:19:45.694351 | orchestrator | Friday 19 September 2025 11:19:35 +0000 (0:00:01.948) 0:00:13.859 ****** 2025-09-19 11:19:45.694354 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694358 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694362 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694365 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694369 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694372 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694376 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694380 | orchestrator | 2025-09-19 11:19:45.694383 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-19 11:19:45.694397 | orchestrator | Friday 19 September 2025 11:19:36 +0000 (0:00:01.088) 0:00:14.948 ****** 2025-09-19 11:19:45.694401 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:19:45.694405 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:19:45.694409 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:19:45.694416 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:19:45.694420 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:19:45.694424 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:19:45.694427 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:19:45.694431 | orchestrator | 2025-09-19 11:19:45.694435 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-09-19 11:19:45.694438 | orchestrator | Friday 19 September 2025 11:19:36 +0000 (0:00:00.665) 0:00:15.613 ****** 2025-09-19 11:19:45.694442 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694446 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694449 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694453 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694457 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694460 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694464 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694467 | orchestrator | 2025-09-19 11:19:45.694471 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-19 11:19:45.694475 | orchestrator | Friday 19 September 2025 11:19:38 +0000 (0:00:01.945) 0:00:17.559 ****** 2025-09-19 11:19:45.694479 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:19:45.694484 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:19:45.694488 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:19:45.694492 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:19:45.694496 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:19:45.694510 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:19:45.694515 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-19 11:19:45.694520 | orchestrator | 2025-09-19 11:19:45.694524 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-19 11:19:45.694528 | orchestrator | Friday 19 September 2025 11:19:39 +0000 (0:00:00.888) 0:00:18.448 ****** 2025-09-19 11:19:45.694533 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694537 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:19:45.694541 | orchestrator | changed: [testbed-node-2] 2025-09-19 
11:19:45.694545 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:19:45.694549 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:19:45.694553 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:19:45.694558 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:19:45.694562 | orchestrator | 2025-09-19 11:19:45.694566 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-19 11:19:45.694570 | orchestrator | Friday 19 September 2025 11:19:41 +0000 (0:00:01.541) 0:00:19.990 ****** 2025-09-19 11:19:45.694575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:19:45.694581 | orchestrator | 2025-09-19 11:19:45.694585 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 11:19:45.694589 | orchestrator | Friday 19 September 2025 11:19:42 +0000 (0:00:01.275) 0:00:21.265 ****** 2025-09-19 11:19:45.694593 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694597 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:45.694602 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694606 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694610 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694614 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694618 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694622 | orchestrator | 2025-09-19 11:19:45.694626 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-19 11:19:45.694631 | orchestrator | Friday 19 September 2025 11:19:43 +0000 (0:00:00.959) 0:00:22.225 ****** 2025-09-19 11:19:45.694635 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:45.694639 | orchestrator | ok: [testbed-node-0] 2025-09-19 
11:19:45.694643 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:45.694651 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:45.694656 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:45.694660 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:45.694664 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:45.694668 | orchestrator | 2025-09-19 11:19:45.694672 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-19 11:19:45.694676 | orchestrator | Friday 19 September 2025 11:19:44 +0000 (0:00:00.871) 0:00:23.096 ****** 2025-09-19 11:19:45.694681 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694685 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694689 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694693 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694698 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 11:19:45.694702 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694706 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 11:19:45.694710 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694715 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 11:19:45.694719 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 11:19:45.694722 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 11:19:45.694727 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 11:19:45.694731 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:19:45.694735 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:19:45.694740 | orchestrator |
2025-09-19 11:19:45.694746 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-19 11:20:01.717686 | orchestrator | Friday 19 September 2025 11:19:45 +0000 (0:00:01.195) 0:00:24.292 ******
2025-09-19 11:20:01.717787 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:20:01.717802 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:20:01.717813 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:20:01.717824 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:20:01.717834 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:01.717845 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:20:01.717857 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:20:01.717868 | orchestrator |
2025-09-19 11:20:01.717879 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-19 11:20:01.717890 | orchestrator | Friday 19 September 2025 11:19:46 +0000 (0:00:00.615) 0:00:24.907 ******
2025-09-19 11:20:01.717902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-node-0, testbed-manager, testbed-node-1, testbed-node-5, testbed-node-3, testbed-node-4
2025-09-19 11:20:01.717916 | orchestrator |
2025-09-19 11:20:01.717927 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-19 11:20:01.717938 | orchestrator | Friday 19 September 2025 11:19:50 +0000 (0:00:04.705) 0:00:29.612 ******
2025-09-19 11:20:01.717963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.717975 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.717988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718163 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718280 | orchestrator |
2025-09-19 11:20:01.718293 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-19 11:20:01.718306 | orchestrator | Friday 19 September 2025 11:19:56 +0000 (0:00:05.212) 0:00:34.824 ******
2025-09-19 11:20:01.718319 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718371 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:20:01.718446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:01.718494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:07.457808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:20:07.457898 | orchestrator |
2025-09-19 11:20:07.457914 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-09-19 11:20:07.457927 | orchestrator | Friday 19 September 2025 11:20:01 +0000 (0:00:05.490) 0:00:40.315 ******
2025-09-19 11:20:07.457964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:20:07.458090 | orchestrator |
2025-09-19 11:20:07.458124 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 11:20:07.458146 | orchestrator | Friday 19 September 2025 11:20:02 +0000 (0:00:01.111) 0:00:41.426 ******
2025-09-19 11:20:07.458165 | orchestrator | ok: [testbed-manager]
2025-09-19 11:20:07.458178 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:20:07.458188 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:20:07.458199 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:20:07.458209 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:20:07.458220 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:20:07.458230 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:20:07.458241 | orchestrator |
2025-09-19 11:20:07.458252 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 11:20:07.458263 | orchestrator | Friday 19 September 2025 11:20:03 +0000 (0:00:01.154) 0:00:42.581 ******
2025-09-19 11:20:07.458274 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458285 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458295 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458306 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458316 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:20:07.458328 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458339 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458349 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458360 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458373 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:20:07.458384 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458396 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458408 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458421 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458433 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:20:07.458446 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458458 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458485 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458499 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458511 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:20:07.458523 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458535 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458547 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458560 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458571 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:07.458584 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458596 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458619 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458631 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458644 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:20:07.458656 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:20:07.458668 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:20:07.458681 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:20:07.458693 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:20:07.458706 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:20:07.458718 | orchestrator |
2025-09-19 11:20:07.458729 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-19 11:20:07.458756 | orchestrator | Friday 19 September 2025 11:20:05 +0000 (0:00:02.030) 0:00:44.612 ******
2025-09-19 11:20:07.458768 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:20:07.458778 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:20:07.458789 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:20:07.458799 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:20:07.458810 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:07.458820 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:20:07.458831 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:20:07.458841 | orchestrator |
2025-09-19 11:20:07.458852 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-19 11:20:07.458862 | orchestrator | Friday 19 September 2025 11:20:06 +0000 (0:00:00.578) 0:00:45.191 ******
2025-09-19 11:20:07.458873 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:20:07.458883 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:20:07.458894 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:20:07.458904 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:20:07.458915 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:07.458925 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:20:07.458936 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:20:07.458946 | orchestrator |
2025-09-19 11:20:07.458957 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:20:07.458972 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:20:07.458984 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459026 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459037 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459047 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459058 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459069 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:20:07.459080 | orchestrator |
2025-09-19 11:20:07.459090 | orchestrator |
2025-09-19 11:20:07.459101 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:20:07.459112 | orchestrator | Friday 19 September 2025 11:20:07 +0000 (0:00:00.615) 0:00:45.807 ******
2025-09-19 11:20:07.459123 | orchestrator | ===============================================================================
2025-09-19 11:20:07.459141 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.49s
2025-09-19 11:20:07.459151 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.21s
2025-09-19 11:20:07.459162 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.71s
2025-09-19 11:20:07.459172 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s
2025-09-19 11:20:07.459183 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.03s
2025-09-19 11:20:07.459193 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s
2025-09-19 11:20:07.459204 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s
2025-09-19 11:20:07.459214 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.95s
2025-09-19 11:20:07.459225 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.94s
2025-09-19 11:20:07.459236 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.54s
2025-09-19 11:20:07.459246 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s
2025-09-19 11:20:07.459257 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2025-09-19 11:20:07.459267 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s
2025-09-19 11:20:07.459278 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s
2025-09-19 11:20:07.459289 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2025-09-19 11:20:07.459299 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s
2025-09-19 11:20:07.459310 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s
2025-09-19 11:20:07.459320 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2025-09-19 11:20:07.459331 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.96s
2025-09-19 11:20:07.459341 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2025-09-19 11:20:07.660182 | orchestrator | + osism apply wireguard
2025-09-19 11:20:19.650204 | orchestrator | 2025-09-19 11:20:19 | INFO  | Task f3a33d0f-8632-4d8a-b315-83b2483d70c4 (wireguard) was prepared for execution.
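The two "Create systemd networkd ... files" tasks above render one `.netdev`/`.network` pair per VXLAN item. A minimal sketch of what the rendered pair for `vxlan1` on testbed-node-0 could look like (file paths and template details are assumptions; VNI 23, MTU 1350, the local IP, and the address are taken from the task items above, while the per-peer `dests` list is presumably wired up separately, e.g. by the dispatcher scripts the role also installs):

```ini
# /etc/systemd/network/30-vxlan1.netdev (path is an assumption)
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10

# /etc/systemd/network/30-vxlan1.network (path is an assumption)
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20
```

This is a sketch for orientation only, not the exact output of the osism.commons.network templates.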
2025-09-19 11:20:19.650304 | orchestrator | 2025-09-19 11:20:19 | INFO  | It takes a moment until task f3a33d0f-8632-4d8a-b315-83b2483d70c4 (wireguard) has been started and output is visible here.
2025-09-19 11:20:38.400573 | orchestrator |
2025-09-19 11:20:38.400714 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-09-19 11:20:38.400742 | orchestrator |
2025-09-19 11:20:38.400764 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-09-19 11:20:38.400784 | orchestrator | Friday 19 September 2025 11:20:23 +0000 (0:00:00.182) 0:00:00.182 ******
2025-09-19 11:20:38.400803 | orchestrator | ok: [testbed-manager]
2025-09-19 11:20:38.400824 | orchestrator |
2025-09-19 11:20:38.400843 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-09-19 11:20:38.400863 | orchestrator | Friday 19 September 2025 11:20:24 +0000 (0:00:01.257) 0:00:01.440 ******
2025-09-19 11:20:38.400881 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.400902 | orchestrator |
2025-09-19 11:20:38.400920 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-09-19 11:20:38.400938 | orchestrator | Friday 19 September 2025 11:20:30 +0000 (0:00:06.022) 0:00:07.462 ******
2025-09-19 11:20:38.400957 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401033 | orchestrator |
2025-09-19 11:20:38.401051 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-09-19 11:20:38.401069 | orchestrator | Friday 19 September 2025 11:20:31 +0000 (0:00:00.550) 0:00:08.013 ******
2025-09-19 11:20:38.401107 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401155 | orchestrator |
2025-09-19 11:20:38.401176 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-09-19 11:20:38.401195 | orchestrator | Friday 19 September 2025 11:20:31 +0000 (0:00:00.415) 0:00:08.428 ******
2025-09-19 11:20:38.401214 | orchestrator | ok: [testbed-manager]
2025-09-19 11:20:38.401232 | orchestrator |
2025-09-19 11:20:38.401250 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-09-19 11:20:38.401269 | orchestrator | Friday 19 September 2025 11:20:32 +0000 (0:00:00.517) 0:00:08.946 ******
2025-09-19 11:20:38.401283 | orchestrator | ok: [testbed-manager]
2025-09-19 11:20:38.401296 | orchestrator |
2025-09-19 11:20:38.401308 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-09-19 11:20:38.401320 | orchestrator | Friday 19 September 2025 11:20:32 +0000 (0:00:00.545) 0:00:09.492 ******
2025-09-19 11:20:38.401332 | orchestrator | ok: [testbed-manager]
2025-09-19 11:20:38.401344 | orchestrator |
2025-09-19 11:20:38.401356 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-09-19 11:20:38.401368 | orchestrator | Friday 19 September 2025 11:20:33 +0000 (0:00:00.411) 0:00:09.903 ******
2025-09-19 11:20:38.401380 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401391 | orchestrator |
2025-09-19 11:20:38.401403 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-19 11:20:38.401416 | orchestrator | Friday 19 September 2025 11:20:34 +0000 (0:00:01.192) 0:00:11.095 ******
2025-09-19 11:20:38.401428 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:20:38.401441 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401452 | orchestrator |
2025-09-19 11:20:38.401463 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-19 11:20:38.401474 | orchestrator | Friday 19 September 2025 11:20:35 +0000 (0:00:00.946) 0:00:12.042 ******
2025-09-19 11:20:38.401484 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401495 | orchestrator |
2025-09-19 11:20:38.401505 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-19 11:20:38.401516 | orchestrator | Friday 19 September 2025 11:20:37 +0000 (0:00:01.682) 0:00:13.724 ******
2025-09-19 11:20:38.401526 | orchestrator | changed: [testbed-manager]
2025-09-19 11:20:38.401537 | orchestrator |
2025-09-19 11:20:38.401547 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:20:38.401558 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:20:38.401570 | orchestrator |
2025-09-19 11:20:38.401581 | orchestrator |
2025-09-19 11:20:38.401591 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:20:38.401602 | orchestrator | Friday 19 September 2025 11:20:38 +0000 (0:00:00.919) 0:00:14.644 ******
2025-09-19 11:20:38.401613 | orchestrator | ===============================================================================
2025-09-19 11:20:38.401623 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.02s
2025-09-19 11:20:38.401634 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.68s
2025-09-19 11:20:38.401644 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.26s
2025-09-19 11:20:38.401655 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s
2025-09-19 11:20:38.401665 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s
2025-09-19 11:20:38.401676 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2025-09-19 11:20:38.401686 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2025-09-19 11:20:38.401697 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s
2025-09-19 11:20:38.401707 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s
2025-09-19 11:20:38.401717 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-09-19 11:20:38.401738 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-09-19 11:20:38.699646 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-19 11:20:38.732298 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-19 11:20:38.732416 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-19 11:20:38.809797 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 192 0 --:--:-- --:--:-- --:--:-- 194
2025-09-19 11:20:38.822834 | orchestrator | + osism apply --environment custom workarounds
2025-09-19 11:20:40.728272 | orchestrator | 2025-09-19 11:20:40 | INFO  | Trying to run play workarounds in environment custom
2025-09-19 11:20:50.946835 | orchestrator | 2025-09-19 11:20:50 | INFO  | Task ff2b86a2-04b4-48b7-ad40-5f5872de1e21 (workarounds) was prepared for execution.
2025-09-19 11:20:50.947006 | orchestrator | 2025-09-19 11:20:50 | INFO  | It takes a moment until task ff2b86a2-04b4-48b7-ad40-5f5872de1e21 (workarounds) has been started and output is visible here.
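The wireguard tasks above generate server keys and a preshared key, render `/etc/wireguard/wg0.conf` plus per-client configuration files, and enable `wg-quick@wg0.service`. For orientation, a generic sketch of the wg0.conf structure such a role typically renders (all values below are placeholders, not taken from this run, and the exact template is an assumption):

```ini
# /etc/wireguard/wg0.conf – illustrative sketch only
[Interface]
PrivateKey = <server-private-key>
Address = <tunnel-address>/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = <client-tunnel-address>/32
```

The matching client configuration mirrors this with the roles of the keys reversed and an `Endpoint =` line pointing at the manager.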
2025-09-19 11:21:15.960323 | orchestrator | 2025-09-19 11:21:15.960432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:21:15.960446 | orchestrator | 2025-09-19 11:21:15.960456 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-19 11:21:15.960467 | orchestrator | Friday 19 September 2025 11:20:54 +0000 (0:00:00.148) 0:00:00.148 ****** 2025-09-19 11:21:15.960478 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960489 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960512 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960522 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960532 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960542 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960552 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-19 11:21:15.960562 | orchestrator | 2025-09-19 11:21:15.960572 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-19 11:21:15.960581 | orchestrator | 2025-09-19 11:21:15.960591 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-19 11:21:15.960601 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.774) 0:00:00.922 ****** 2025-09-19 11:21:15.960611 | orchestrator | ok: [testbed-manager] 2025-09-19 11:21:15.960623 | orchestrator | 2025-09-19 11:21:15.960633 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-19 11:21:15.960642 | orchestrator | 2025-09-19 11:21:15.960652 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-19 11:21:15.960662 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:02.371) 0:00:03.294 ****** 2025-09-19 11:21:15.960672 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:21:15.960682 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:21:15.960692 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:21:15.960702 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:21:15.960711 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:21:15.960721 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:21:15.960731 | orchestrator | 2025-09-19 11:21:15.960742 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-19 11:21:15.960752 | orchestrator | 2025-09-19 11:21:15.960762 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-19 11:21:15.960772 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:01.891) 0:00:05.185 ****** 2025-09-19 11:21:15.960783 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960794 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960823 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960834 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960844 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960853 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-19 11:21:15.960863 | orchestrator | 2025-09-19 11:21:15.960875 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-19 11:21:15.960886 | orchestrator | Friday 19 September 2025 11:21:01 +0000 (0:00:01.492) 0:00:06.678 ****** 2025-09-19 11:21:15.960897 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:21:15.960909 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:21:15.960920 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:21:15.960951 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:21:15.960962 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:21:15.960972 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:21:15.960982 | orchestrator | 2025-09-19 11:21:15.960993 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-19 11:21:15.961004 | orchestrator | Friday 19 September 2025 11:21:05 +0000 (0:00:03.684) 0:00:10.363 ****** 2025-09-19 11:21:15.961015 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:21:15.961025 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:21:15.961036 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:21:15.961046 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:21:15.961057 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:15.961068 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:15.961079 | orchestrator | 2025-09-19 11:21:15.961089 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-19 11:21:15.961099 | orchestrator | 2025-09-19 11:21:15.961108 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-19 11:21:15.961118 | orchestrator | Friday 19 September 2025 11:21:05 +0000 (0:00:00.735) 0:00:11.099 ****** 2025-09-19 11:21:15.961127 | orchestrator | changed: [testbed-manager] 2025-09-19 11:21:15.961136 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:21:15.961145 | orchestrator | changed: [testbed-node-1] 2025-09-19 
11:21:15.961154 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:21:15.961164 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:21:15.961173 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:21:15.961182 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:21:15.961191 | orchestrator | 2025-09-19 11:21:15.961200 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-19 11:21:15.961210 | orchestrator | Friday 19 September 2025 11:21:07 +0000 (0:00:01.677) 0:00:12.777 ****** 2025-09-19 11:21:15.961219 | orchestrator | changed: [testbed-manager] 2025-09-19 11:21:15.961228 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:21:15.961238 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:21:15.961247 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:21:15.961256 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:21:15.961265 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:21:15.961288 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:21:15.961298 | orchestrator | 2025-09-19 11:21:15.961307 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-19 11:21:15.961317 | orchestrator | Friday 19 September 2025 11:21:09 +0000 (0:00:01.659) 0:00:14.436 ****** 2025-09-19 11:21:15.961326 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:21:15.961336 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:21:15.961345 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:21:15.961354 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:21:15.961363 | orchestrator | ok: [testbed-manager] 2025-09-19 11:21:15.961380 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:21:15.961389 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:21:15.961398 | orchestrator | 2025-09-19 11:21:15.961413 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-19 11:21:15.961422 | orchestrator 
| Friday 19 September 2025 11:21:10 +0000 (0:00:01.528) 0:00:15.965 ******
2025-09-19 11:21:15.961432 | orchestrator | changed: [testbed-manager]
2025-09-19 11:21:15.961441 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:21:15.961450 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:21:15.961459 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:21:15.961468 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:21:15.961478 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:21:15.961487 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:21:15.961496 | orchestrator |
2025-09-19 11:21:15.961505 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-19 11:21:15.961514 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:01.761) 0:00:17.726 ******
2025-09-19 11:21:15.961524 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:21:15.961533 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:21:15.961542 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:21:15.961551 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:21:15.961560 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:15.961570 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:15.961579 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:15.961588 | orchestrator |
2025-09-19 11:21:15.961598 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-19 11:21:15.961607 | orchestrator |
2025-09-19 11:21:15.961616 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-19 11:21:15.961626 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.627) 0:00:18.353 ******
2025-09-19 11:21:15.961635 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:21:15.961644 | orchestrator | ok: [testbed-manager]
2025-09-19 11:21:15.961654 | orchestrator | ok:
[testbed-node-0]
2025-09-19 11:21:15.961663 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:21:15.961672 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:21:15.961681 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:21:15.961690 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:21:15.961700 | orchestrator |
2025-09-19 11:21:15.961709 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:21:15.961719 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:21:15.961730 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961740 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961749 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961758 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961768 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961777 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:15.961786 | orchestrator |
2025-09-19 11:21:15.961795 | orchestrator |
2025-09-19 11:21:15.961805 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:21:15.961814 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:02.787) 0:00:21.141 ******
2025-09-19 11:21:15.961830 | orchestrator | ===============================================================================
2025-09-19 11:21:15.961839 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.69s
2025-09-19 11:21:15.961848 | orchestrator |
Install python3-docker -------------------------------------------------- 2.79s
2025-09-19 11:21:15.961858 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2025-09-19 11:21:15.961867 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s
2025-09-19 11:21:15.961876 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s
2025-09-19 11:21:15.961885 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s
2025-09-19 11:21:15.961894 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.66s
2025-09-19 11:21:15.961904 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2025-09-19 11:21:15.961913 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2025-09-19 11:21:15.961922 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2025-09-19 11:21:15.961946 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.74s
2025-09-19 11:21:15.961961 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-09-19 11:21:16.658834 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-19 11:21:28.460484 | orchestrator | 2025-09-19 11:21:28 | INFO  | Task fc82f3b1-ddc5-4296-8643-224a52492abf (reboot) was prepared for execution.
2025-09-19 11:21:28.460596 | orchestrator | 2025-09-19 11:21:28 | INFO  | It takes a moment until task fc82f3b1-ddc5-4296-8643-224a52492abf (reboot) has been started and output is visible here.
2025-09-19 11:21:38.018734 | orchestrator |
2025-09-19 11:21:38.018849 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.018865 | orchestrator |
2025-09-19 11:21:38.018876 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 11:21:38.018887 | orchestrator | Friday 19 September 2025 11:21:32 +0000 (0:00:00.189) 0:00:00.189 ******
2025-09-19 11:21:38.018898 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:21:38.018971 | orchestrator |
2025-09-19 11:21:38.018983 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.018994 | orchestrator | Friday 19 September 2025 11:21:32 +0000 (0:00:00.105) 0:00:00.294 ******
2025-09-19 11:21:38.019005 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:21:38.019016 | orchestrator |
2025-09-19 11:21:38.019026 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 11:21:38.019037 | orchestrator | Friday 19 September 2025 11:21:33 +0000 (0:00:00.860) 0:00:01.155 ******
2025-09-19 11:21:38.019048 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:21:38.019058 | orchestrator |
2025-09-19 11:21:38.019070 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.019080 | orchestrator |
2025-09-19 11:21:38.019091 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 11:21:38.019101 | orchestrator | Friday 19 September 2025 11:21:33 +0000 (0:00:00.112) 0:00:01.268 ******
2025-09-19 11:21:38.019112 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:21:38.019122 | orchestrator |
2025-09-19 11:21:38.019133 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.019144 | orchestrator | Friday 19 September
2025 11:21:33 +0000 (0:00:00.106) 0:00:01.374 ******
2025-09-19 11:21:38.019154 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:21:38.019165 | orchestrator |
2025-09-19 11:21:38.019175 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 11:21:38.019186 | orchestrator | Friday 19 September 2025 11:21:33 +0000 (0:00:00.681) 0:00:02.055 ******
2025-09-19 11:21:38.019196 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:21:38.019231 | orchestrator |
2025-09-19 11:21:38.019243 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.019253 | orchestrator |
2025-09-19 11:21:38.019264 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 11:21:38.019275 | orchestrator | Friday 19 September 2025 11:21:34 +0000 (0:00:00.119) 0:00:02.175 ******
2025-09-19 11:21:38.019288 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:21:38.019300 | orchestrator |
2025-09-19 11:21:38.019311 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.019324 | orchestrator | Friday 19 September 2025 11:21:34 +0000 (0:00:00.214) 0:00:02.390 ******
2025-09-19 11:21:38.019336 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:21:38.019348 | orchestrator |
2025-09-19 11:21:38.019360 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 11:21:38.019372 | orchestrator | Friday 19 September 2025 11:21:34 +0000 (0:00:00.690) 0:00:03.080 ******
2025-09-19 11:21:38.019384 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:21:38.019397 | orchestrator |
2025-09-19 11:21:38.019409 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.019421 | orchestrator |
2025-09-19 11:21:38.019434 | orchestrator | TASK [Exit playbook, if
user did not mean to reboot systems] *******************
2025-09-19 11:21:38.019446 | orchestrator | Friday 19 September 2025 11:21:35 +0000 (0:00:00.127) 0:00:03.208 ******
2025-09-19 11:21:38.019459 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:38.019471 | orchestrator |
2025-09-19 11:21:38.019484 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.019496 | orchestrator | Friday 19 September 2025 11:21:35 +0000 (0:00:00.100) 0:00:03.309 ******
2025-09-19 11:21:38.019508 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:21:38.019520 | orchestrator |
2025-09-19 11:21:38.019532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 11:21:38.019545 | orchestrator | Friday 19 September 2025 11:21:35 +0000 (0:00:00.706) 0:00:04.016 ******
2025-09-19 11:21:38.019557 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:38.019570 | orchestrator |
2025-09-19 11:21:38.019582 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.019592 | orchestrator |
2025-09-19 11:21:38.019603 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 11:21:38.019614 | orchestrator | Friday 19 September 2025 11:21:36 +0000 (0:00:00.117) 0:00:04.133 ******
2025-09-19 11:21:38.019624 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:38.019635 | orchestrator |
2025-09-19 11:21:38.019645 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.019656 | orchestrator | Friday 19 September 2025 11:21:36 +0000 (0:00:00.107) 0:00:04.240 ******
2025-09-19 11:21:38.019666 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:21:38.019677 | orchestrator |
2025-09-19 11:21:38.019687 | orchestrator | TASK [Reboot system - wait for the reboot to complete]
*************************
2025-09-19 11:21:38.019698 | orchestrator | Friday 19 September 2025 11:21:36 +0000 (0:00:00.625) 0:00:04.865 ******
2025-09-19 11:21:38.019708 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:38.019719 | orchestrator |
2025-09-19 11:21:38.019729 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 11:21:38.019740 | orchestrator |
2025-09-19 11:21:38.019750 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 11:21:38.019761 | orchestrator | Friday 19 September 2025 11:21:36 +0000 (0:00:00.127) 0:00:04.993 ******
2025-09-19 11:21:38.019772 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:38.019782 | orchestrator |
2025-09-19 11:21:38.019793 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 11:21:38.019803 | orchestrator | Friday 19 September 2025 11:21:36 +0000 (0:00:00.104) 0:00:05.098 ******
2025-09-19 11:21:38.019814 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:21:38.019824 | orchestrator |
2025-09-19 11:21:38.019835 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 11:21:38.019855 | orchestrator | Friday 19 September 2025 11:21:37 +0000 (0:00:00.681) 0:00:05.780 ******
2025-09-19 11:21:38.019896 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:38.019926 | orchestrator |
2025-09-19 11:21:38.019938 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:21:38.019949 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:38.019961 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:38.019972 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2
rescued=0 ignored=0
2025-09-19 11:21:38.019982 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:38.019993 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:38.020003 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:21:38.020014 | orchestrator |
2025-09-19 11:21:38.020024 | orchestrator |
2025-09-19 11:21:38.020035 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:21:38.020046 | orchestrator | Friday 19 September 2025 11:21:37 +0000 (0:00:00.039) 0:00:05.819 ******
2025-09-19 11:21:38.020056 | orchestrator | ===============================================================================
2025-09-19 11:21:38.020067 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.25s
2025-09-19 11:21:38.020082 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s
2025-09-19 11:21:38.020093 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2025-09-19 11:21:38.312430 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 11:21:50.395977 | orchestrator | 2025-09-19 11:21:50 | INFO  | Task 3b5bfa74-0aea-4d5c-8f77-e69210a4adfc (wait-for-connection) was prepared for execution.
2025-09-19 11:21:50.396085 | orchestrator | 2025-09-19 11:21:50 | INFO  | It takes a moment until task 3b5bfa74-0aea-4d5c-8f77-e69210a4adfc (wait-for-connection) has been started and output is visible here.
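The reboot playbook above runs one serial play per host, triggers the reboot without waiting, and then a separate wait-for-connection run verifies reachability. A minimal Ansible sketch of that async-reboot pattern, assuming standard `ansible.builtin` modules (module choices, command, and timeout are assumptions, not taken from the actual osism playbooks):

```yaml
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1
  tasks:
    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now
      async: 1          # fire-and-forget: do not wait for the task to finish
      poll: 0
      become: true

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 10       # give the host time to actually go down
        timeout: 600
```

Running the wait in its own play (as the trace does via `osism apply wait-for-connection`) avoids racing the SSH connection against the shutdown.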
2025-09-19 11:22:06.192791 | orchestrator |
2025-09-19 11:22:06.193015 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-09-19 11:22:06.193044 | orchestrator |
2025-09-19 11:22:06.193056 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-09-19 11:22:06.193068 | orchestrator | Friday 19 September 2025 11:21:54 +0000 (0:00:00.239) 0:00:00.239 ******
2025-09-19 11:22:06.193079 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:22:06.193090 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:22:06.193101 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:22:06.193111 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:22:06.193122 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:22:06.193132 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:22:06.193143 | orchestrator |
2025-09-19 11:22:06.193154 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:22:06.193169 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193189 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193208 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193260 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193281 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193318 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:06.193354 | orchestrator |
2025-09-19 11:22:06.193376 | orchestrator |
2025-09-19 11:22:06.193396 | orchestrator | TASKS RECAP
********************************************************************
2025-09-19 11:22:06.193418 | orchestrator | Friday 19 September 2025 11:22:05 +0000 (0:00:11.560) 0:00:11.799 ******
2025-09-19 11:22:06.193439 | orchestrator | ===============================================================================
2025-09-19 11:22:06.193459 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s
2025-09-19 11:22:06.388398 | orchestrator | + osism apply hddtemp
2025-09-19 11:22:18.309299 | orchestrator | 2025-09-19 11:22:18 | INFO  | Task cf9d52ee-3c7e-4f8f-a3b5-63d06b4e66b8 (hddtemp) was prepared for execution.
2025-09-19 11:22:18.309405 | orchestrator | 2025-09-19 11:22:18 | INFO  | It takes a moment until task cf9d52ee-3c7e-4f8f-a3b5-63d06b4e66b8 (hddtemp) has been started and output is visible here.
2025-09-19 11:22:45.533643 | orchestrator |
2025-09-19 11:22:45.533730 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-09-19 11:22:45.533741 | orchestrator |
2025-09-19 11:22:45.533765 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-09-19 11:22:45.533772 | orchestrator | Friday 19 September 2025 11:22:21 +0000 (0:00:00.199) 0:00:00.199 ******
2025-09-19 11:22:45.533780 | orchestrator | ok: [testbed-manager]
2025-09-19 11:22:45.533788 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:22:45.533795 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:22:45.533801 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:22:45.533808 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:22:45.533815 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:22:45.533821 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:22:45.533828 | orchestrator |
2025-09-19 11:22:45.533835 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-09-19 11:22:45.533841 | orchestrator | Friday 19 September 2025
11:22:21 +0000 (0:00:00.501) 0:00:00.701 ******
2025-09-19 11:22:45.533913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:22:45.533924 | orchestrator |
2025-09-19 11:22:45.533931 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-09-19 11:22:45.533938 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:01.004) 0:00:01.706 ******
2025-09-19 11:22:45.533944 | orchestrator | ok: [testbed-manager]
2025-09-19 11:22:45.533951 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:22:45.533957 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:22:45.533964 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:22:45.533970 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:22:45.533977 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:22:45.533983 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:22:45.533990 | orchestrator |
2025-09-19 11:22:45.533996 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-09-19 11:22:45.534003 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:01.983) 0:00:03.690 ******
2025-09-19 11:22:45.534010 | orchestrator | changed: [testbed-manager]
2025-09-19 11:22:45.534062 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:22:45.534069 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:22:45.534076 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:22:45.534082 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:22:45.534106 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:22:45.534113 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:22:45.534120 | orchestrator |
2025-09-19 11:22:45.534126 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is
available] *********
2025-09-19 11:22:45.534133 | orchestrator | Friday 19 September 2025 11:22:26 +0000 (0:00:01.183) 0:00:04.873 ******
2025-09-19 11:22:45.534140 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:22:45.534146 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:22:45.534153 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:22:45.534159 | orchestrator | ok: [testbed-manager]
2025-09-19 11:22:45.534166 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:22:45.534172 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:22:45.534179 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:22:45.534185 | orchestrator |
2025-09-19 11:22:45.534192 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-09-19 11:22:45.534198 | orchestrator | Friday 19 September 2025 11:22:27 +0000 (0:00:01.794) 0:00:06.667 ******
2025-09-19 11:22:45.534205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:22:45.534213 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:22:45.534220 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:22:45.534228 | orchestrator | changed: [testbed-manager]
2025-09-19 11:22:45.534235 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:45.534243 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:45.534250 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:22:45.534257 | orchestrator |
2025-09-19 11:22:45.534265 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-09-19 11:22:45.534272 | orchestrator | Friday 19 September 2025 11:22:28 +0000 (0:00:00.805) 0:00:07.473 ******
2025-09-19 11:22:45.534280 | orchestrator | changed: [testbed-manager]
2025-09-19 11:22:45.534287 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:22:45.534293 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:22:45.534300 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:22:45.534306 | orchestrator | changed:
[testbed-node-2]
2025-09-19 11:22:45.534312 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:22:45.534321 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:22:45.534332 | orchestrator |
2025-09-19 11:22:45.534343 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-09-19 11:22:45.534353 | orchestrator | Friday 19 September 2025 11:22:41 +0000 (0:00:13.225) 0:00:20.699 ******
2025-09-19 11:22:45.534364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:22:45.534374 | orchestrator |
2025-09-19 11:22:45.534386 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-09-19 11:22:45.534397 | orchestrator | Friday 19 September 2025 11:22:43 +0000 (0:00:01.385) 0:00:22.085 ******
2025-09-19 11:22:45.534407 | orchestrator | changed: [testbed-manager]
2025-09-19 11:22:45.534418 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:22:45.534429 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:22:45.534440 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:22:45.534451 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:22:45.534462 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:22:45.534473 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:22:45.534482 | orchestrator |
2025-09-19 11:22:45.534492 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:22:45.534503 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:22:45.534533 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534554 | orchestrator | testbed-node-1 :
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534576 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534587 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534597 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534608 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:22:45.534618 | orchestrator |
2025-09-19 11:22:45.534628 | orchestrator |
2025-09-19 11:22:45.534640 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:22:45.534650 | orchestrator | Friday 19 September 2025 11:22:45 +0000 (0:00:01.867) 0:00:23.953 ******
2025-09-19 11:22:45.534661 | orchestrator | ===============================================================================
2025-09-19 11:22:45.534671 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.23s
2025-09-19 11:22:45.534682 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s
2025-09-19 11:22:45.534693 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s
2025-09-19 11:22:45.534704 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.79s
2025-09-19 11:22:45.534714 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.39s
2025-09-19 11:22:45.534724 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s
2025-09-19 11:22:45.534735 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.00s
2025-09-19 11:22:45.534746 | orchestrator | osism.services.hddtemp : Load
Kernel Module drivetemp ------------------- 0.81s
2025-09-19 11:22:45.534757 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s
2025-09-19 11:22:45.846115 | orchestrator | ++ semver latest 7.1.1
2025-09-19 11:22:45.908221 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 11:22:45.908308 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 11:22:45.908323 | orchestrator | + sudo systemctl restart manager.service
2025-09-19 11:22:59.778460 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 11:22:59.778570 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 11:22:59.778585 | orchestrator | + local max_attempts=60
2025-09-19 11:22:59.778597 | orchestrator | + local name=ceph-ansible
2025-09-19 11:22:59.778608 | orchestrator | + local attempt_num=1
2025-09-19 11:22:59.778619 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:22:59.818975 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:22:59.819088 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:22:59.819107 | orchestrator | + sleep 5
2025-09-19 11:23:04.824063 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:04.870548 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:04.870639 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:04.870653 | orchestrator | + sleep 5
2025-09-19 11:23:09.873809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:09.915722 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:09.915788 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:09.915802 | orchestrator | + sleep 5
2025-09-19 11:23:14.920016 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:14.961516 | orchestrator | +
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:14.961607 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:14.961621 | orchestrator | + sleep 5
2025-09-19 11:23:19.966413 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:20.003632 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:20.003702 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:20.003737 | orchestrator | + sleep 5
2025-09-19 11:23:25.008521 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:25.050264 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:25.050383 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:25.050408 | orchestrator | + sleep 5
2025-09-19 11:23:30.057145 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:30.089289 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:30.089390 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:30.089405 | orchestrator | + sleep 5
2025-09-19 11:23:35.097098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:35.127456 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:35.127552 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:35.127568 | orchestrator | + sleep 5
2025-09-19 11:23:40.131759 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:40.169734 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:40.169793 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:40.169812 | orchestrator | + sleep 5
2025-09-19 11:23:45.173537 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:45.209545 | orchestrator | + [[ starting ==
\h\e\a\l\t\h\y ]]
2025-09-19 11:23:45.209585 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:45.209590 | orchestrator | + sleep 5
2025-09-19 11:23:50.213957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:50.251088 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:50.251183 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:50.251196 | orchestrator | + sleep 5
2025-09-19 11:23:55.256166 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:23:55.295924 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 11:23:55.296034 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:23:55.296052 | orchestrator | + sleep 5
2025-09-19 11:24:00.300562 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:24:00.336278 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 11:24:00.336359 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:24:00.336371 | orchestrator | + sleep 5
2025-09-19 11:24:05.341714 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:24:05.385715 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:24:05.385861 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 11:24:05.385878 | orchestrator | + local max_attempts=60
2025-09-19 11:24:05.385891 | orchestrator | + local name=kolla-ansible
2025-09-19 11:24:05.385902 | orchestrator | + local attempt_num=1
2025-09-19 11:24:05.386797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 11:24:05.429819 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:24:05.429894 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 11:24:05.429908 | orchestrator | + local max_attempts=60
2025-09-19
11:24:05.429919 | orchestrator | + local name=osism-ansible 2025-09-19 11:24:05.429931 | orchestrator | + local attempt_num=1 2025-09-19 11:24:05.430428 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-19 11:24:05.470577 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:24:05.470654 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-19 11:24:05.470669 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-19 11:24:05.652656 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-19 11:24:05.827627 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-19 11:24:05.980015 | orchestrator | ARA in osism-ansible already disabled. 2025-09-19 11:24:06.153087 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-19 11:24:06.154196 | orchestrator | + osism apply gather-facts 2025-09-19 11:24:18.272030 | orchestrator | 2025-09-19 11:24:18 | INFO  | Task 0e9b28ae-7cb1-4043-8561-a0f85d06fe1c (gather-facts) was prepared for execution. 2025-09-19 11:24:18.272140 | orchestrator | 2025-09-19 11:24:18 | INFO  | It takes a moment until task 0e9b28ae-7cb1-4043-8561-a0f85d06fe1c (gather-facts) has been started and output is visible here. 
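The `set -x` trace above (repeated `docker inspect` calls, `attempt_num`, `sleep 5`) comes from a health-wait helper. A minimal reconstruction of that loop, inferred from the trace — the `DOCKER` override variable is an assumption added here so the loop can be exercised without a Docker daemon:

```shell
#!/usr/bin/env bash
# Reconstruction of the wait_for_container_healthy helper seen in the trace.
# DOCKER defaults to /usr/bin/docker as in the log; the override hook is an
# assumption for illustration/testing, not part of the original script.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container health status every 5 seconds, as in the trace:
    # states progress unhealthy -> starting -> healthy.
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second interval, as in the log, the helper waits up to roughly five minutes per container before giving up.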
2025-09-19 11:24:30.210592 | orchestrator | 2025-09-19 11:24:30.210729 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:24:30.210840 | orchestrator | 2025-09-19 11:24:30.210855 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:24:30.210866 | orchestrator | Friday 19 September 2025 11:24:21 +0000 (0:00:00.164) 0:00:00.164 ****** 2025-09-19 11:24:30.210878 | orchestrator | ok: [testbed-manager] 2025-09-19 11:24:30.210889 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:24:30.210901 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:24:30.210911 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:24:30.210922 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:24:30.210932 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:24:30.210943 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:24:30.210953 | orchestrator | 2025-09-19 11:24:30.210964 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 11:24:30.210975 | orchestrator | 2025-09-19 11:24:30.210986 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 11:24:30.210996 | orchestrator | Friday 19 September 2025 11:24:29 +0000 (0:00:07.733) 0:00:07.898 ****** 2025-09-19 11:24:30.211007 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:24:30.211019 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:24:30.211029 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:24:30.211040 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:24:30.211050 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:24:30.211061 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:24:30.211071 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:24:30.211082 | orchestrator | 2025-09-19 11:24:30.211093 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 11:24:30.211104 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211118 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211131 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211143 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211155 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211167 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211180 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:24:30.211192 | orchestrator | 2025-09-19 11:24:30.211205 | orchestrator | 2025-09-19 11:24:30.211217 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:24:30.211229 | orchestrator | Friday 19 September 2025 11:24:29 +0000 (0:00:00.496) 0:00:08.394 ****** 2025-09-19 11:24:30.211241 | orchestrator | =============================================================================== 2025-09-19 11:24:30.211253 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.73s 2025-09-19 11:24:30.211266 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-09-19 11:24:30.501968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-19 11:24:30.519050 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-19 11:24:30.536416 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-19 11:24:30.550527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-19 11:24:30.567199 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-19 11:24:30.582411 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-19 11:24:30.596450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-19 11:24:30.610449 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-19 11:24:30.630829 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-19 11:24:30.648039 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-19 11:24:30.664709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-19 11:24:30.683647 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-19 11:24:30.696920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-19 11:24:30.709091 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-19 11:24:30.723391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-19 11:24:30.742318 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-19 11:24:30.757295 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-19 11:24:30.771597 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-19 11:24:30.788798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-19 11:24:30.804908 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-19 11:24:30.815478 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-19 11:24:31.227146 | orchestrator | ok: Runtime: 0:23:07.456077 2025-09-19 11:24:31.324044 | 2025-09-19 11:24:31.324166 | TASK [Deploy services] 2025-09-19 11:24:31.856237 | orchestrator | skipping: Conditional result was False 2025-09-19 11:24:31.866932 | 2025-09-19 11:24:31.867072 | TASK [Deploy in a nutshell] 2025-09-19 11:24:32.576665 | orchestrator | 2025-09-19 11:24:32.576789 | orchestrator | # PULL IMAGES 2025-09-19 11:24:32.576799 | orchestrator | 2025-09-19 11:24:32.576804 | orchestrator | + set -e 2025-09-19 11:24:32.576811 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 11:24:32.576820 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 11:24:32.576827 | orchestrator | ++ INTERACTIVE=false 2025-09-19 11:24:32.576848 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 11:24:32.576858 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 11:24:32.576864 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 11:24:32.576868 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 11:24:32.576875 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 11:24:32.576880 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 11:24:32.576887 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-19 11:24:32.576891 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 11:24:32.576898 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 11:24:32.576902 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 11:24:32.576908 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 11:24:32.576913 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 11:24:32.576917 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 11:24:32.576921 | orchestrator | ++ export ARA=false 2025-09-19 11:24:32.576925 | orchestrator | ++ ARA=false 2025-09-19 11:24:32.576929 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 11:24:32.576933 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 11:24:32.576936 | orchestrator | ++ export TEMPEST=false 2025-09-19 11:24:32.576940 | orchestrator | ++ TEMPEST=false 2025-09-19 11:24:32.576944 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 11:24:32.576948 | orchestrator | ++ IS_ZUUL=true 2025-09-19 11:24:32.576952 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 11:24:32.576956 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 11:24:32.576960 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 11:24:32.576964 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 11:24:32.576968 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 11:24:32.576972 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 11:24:32.576975 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 11:24:32.576979 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 11:24:32.576983 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 11:24:32.576987 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 11:24:32.576991 | orchestrator | + echo 2025-09-19 11:24:32.576998 | orchestrator | + echo '# PULL IMAGES' 2025-09-19 11:24:32.577002 | orchestrator | + echo 2025-09-19 11:24:32.577868 | orchestrator | ++ semver latest 7.0.0 2025-09-19 
11:24:32.614549 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-19 11:24:32.614609 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 11:24:32.614615 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-19 11:24:34.467418 | orchestrator | 2025-09-19 11:24:34 | INFO  | Trying to run play pull-images in environment custom 2025-09-19 11:24:44.597082 | orchestrator | 2025-09-19 11:24:44 | INFO  | Task 3e13c3e6-710f-4065-a308-9080c2e66b9c (pull-images) was prepared for execution. 2025-09-19 11:24:44.597206 | orchestrator | 2025-09-19 11:24:44 | INFO  | Task 3e13c3e6-710f-4065-a308-9080c2e66b9c is running in background. No more output. Check ARA for logs. 2025-09-19 11:24:47.008981 | orchestrator | 2025-09-19 11:24:47 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-19 11:24:57.190623 | orchestrator | 2025-09-19 11:24:57 | INFO  | Task bb9e7c27-1901-4163-9db0-7d585e0b4c21 (wipe-partitions) was prepared for execution. 2025-09-19 11:24:57.190725 | orchestrator | 2025-09-19 11:24:57 | INFO  | It takes a moment until task bb9e7c27-1901-4163-9db0-7d585e0b4c21 (wipe-partitions) has been started and output is visible here. 
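The trace just above shows a version gate: `semver latest 7.0.0` prints `-1`, the `-ge 0` test fails, and the fallback `[[ latest == latest ]]` lets the pull run anyway. A sketch of that gate, where `semver_cmp` is a stand-in (an assumption, built on `sort -V`) for the real `semver` helper that prints -1/0/1, and the `echo` replaces the actual `osism apply` call:

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace: run pull-images when the
# manager version is >= 7.0.0 OR is the literal "latest".
# semver_cmp approximates the log's `semver` helper for plain x.y.z versions;
# it is not the original implementation.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then echo 0; return; fi
    local lower
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [[ "$lower" == "$1" ]]; then echo -1; else echo 1; fi
}

MANAGER_VERSION="${MANAGER_VERSION:-latest}"
if [[ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The `--no-wait` flag is why the log then says "Task … is running in background. No more output." — the pull proceeds asynchronously while the wipe-partitions play starts.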
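The wipe-partitions play announced here (output follows below) clears Ceph OSD disks in four steps: wipe filesystem/LVM signatures, zero the first 32M, reload udev rules, and retrigger device events. A manual-equivalent sketch of those steps — the device list and the `DRY_RUN` guard are assumptions for illustration; with `DRY_RUN=0` these commands are destructive and must run as root:

```shell
#!/usr/bin/env bash
# Sketch of the disk-wipe steps performed per device by the play below.
# DRY_RUN=1 (the default here) only prints the commands; this guard is an
# illustration aid, not part of the original play.
DRY_RUN="${DRY_RUN:-1}"

run() { if [[ "$DRY_RUN" == 1 ]]; then echo "would run: $*"; else "$@"; fi; }

for dev in /dev/sdb /dev/sdc /dev/sdd; do      # example devices, as in the log
    run wipefs --all "$dev"                      # TASK [Wipe partitions with wipefs]
    run dd if=/dev/zero of="$dev" bs=1M count=32 # TASK [Overwrite first 32M with zeros]
done
run udevadm control --reload-rules               # TASK [Reload udev rules]
run udevadm trigger                              # TASK [Request device events from the kernel]
```

Zeroing the first 32M removes partition tables and LVM/Ceph labels that `wipefs` alone may miss; the udev reload/trigger makes the kernel and udev re-evaluate the now-blank devices before `ceph-configure-lvm-volumes` runs.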
2025-09-19 11:25:09.608570 | orchestrator | 2025-09-19 11:25:09.608768 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-19 11:25:09.608787 | orchestrator | 2025-09-19 11:25:09.608798 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-19 11:25:09.608815 | orchestrator | Friday 19 September 2025 11:25:00 +0000 (0:00:00.122) 0:00:00.122 ****** 2025-09-19 11:25:09.608825 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:25:09.608836 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:25:09.608846 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:25:09.608857 | orchestrator | 2025-09-19 11:25:09.608867 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-19 11:25:09.608898 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:00.545) 0:00:00.668 ****** 2025-09-19 11:25:09.608908 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:25:09.608918 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:25:09.608931 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:25:09.608941 | orchestrator | 2025-09-19 11:25:09.608951 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-19 11:25:09.608961 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:00.205) 0:00:00.874 ****** 2025-09-19 11:25:09.608971 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:25:09.608981 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:25:09.608991 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:25:09.609000 | orchestrator | 2025-09-19 11:25:09.609010 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-19 11:25:09.609020 | orchestrator | Friday 19 September 2025 11:25:02 +0000 (0:00:00.611) 0:00:01.485 ****** 2025-09-19 11:25:09.609030 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:25:09.609039 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:25:09.609049 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:25:09.609059 | orchestrator | 2025-09-19 11:25:09.609068 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-19 11:25:09.609078 | orchestrator | Friday 19 September 2025 11:25:02 +0000 (0:00:00.229) 0:00:01.714 ****** 2025-09-19 11:25:09.609088 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:25:09.609101 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:25:09.609112 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:25:09.609124 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:25:09.609135 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:25:09.609146 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:25:09.609157 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:25:09.609168 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:25:09.609179 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:25:09.609190 | orchestrator | 2025-09-19 11:25:09.609201 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-19 11:25:09.609213 | orchestrator | Friday 19 September 2025 11:25:03 +0000 (0:00:01.239) 0:00:02.954 ****** 2025-09-19 11:25:09.609225 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:25:09.609236 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:25:09.609247 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:25:09.609258 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:25:09.609269 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:25:09.609280 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:25:09.609291 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:25:09.609302 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:25:09.609313 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:25:09.609324 | orchestrator | 2025-09-19 11:25:09.609336 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-19 11:25:09.609347 | orchestrator | Friday 19 September 2025 11:25:05 +0000 (0:00:01.405) 0:00:04.360 ****** 2025-09-19 11:25:09.609356 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:25:09.609366 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:25:09.609375 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:25:09.609385 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:25:09.609395 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:25:09.609404 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:25:09.609414 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:25:09.609431 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:25:09.609446 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:25:09.609456 | orchestrator | 2025-09-19 11:25:09.609466 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-19 11:25:09.609476 | orchestrator | Friday 19 September 2025 11:25:07 +0000 (0:00:02.921) 0:00:07.281 ****** 2025-09-19 11:25:09.609485 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:25:09.609495 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:25:09.609505 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:25:09.609514 | orchestrator | 2025-09-19 11:25:09.609524 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-19 11:25:09.609533 | orchestrator | Friday 19 September 2025 11:25:08 +0000 (0:00:00.614) 0:00:07.896 ****** 2025-09-19 11:25:09.609543 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:25:09.609553 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:25:09.609562 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:25:09.609572 | orchestrator | 2025-09-19 11:25:09.609581 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:25:09.609593 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:09.609604 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:09.609628 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:09.609638 | orchestrator | 2025-09-19 11:25:09.609648 | orchestrator | 2025-09-19 11:25:09.609657 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:25:09.609667 | orchestrator | Friday 19 September 2025 11:25:09 +0000 (0:00:00.685) 0:00:08.581 ****** 2025-09-19 11:25:09.609677 | orchestrator | =============================================================================== 2025-09-19 11:25:09.609686 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.92s 2025-09-19 11:25:09.609696 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.41s 2025-09-19 11:25:09.609705 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2025-09-19 11:25:09.609732 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s 2025-09-19 11:25:09.609743 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.61s 2025-09-19 11:25:09.609753 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2025-09-19 11:25:09.609763 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-09-19 11:25:09.609772 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-09-19 11:25:09.609782 | orchestrator | Remove all rook related logical devices --------------------------------- 0.21s 2025-09-19 11:25:21.601189 | orchestrator | 2025-09-19 11:25:21 | INFO  | Task a6abd9af-36cc-4d1f-a034-b280d9075410 (facts) was prepared for execution. 2025-09-19 11:25:21.601277 | orchestrator | 2025-09-19 11:25:21 | INFO  | It takes a moment until task a6abd9af-36cc-4d1f-a034-b280d9075410 (facts) has been started and output is visible here. 2025-09-19 11:25:32.983011 | orchestrator | 2025-09-19 11:25:32.983163 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 11:25:32.983191 | orchestrator | 2025-09-19 11:25:32.983210 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 11:25:32.983228 | orchestrator | Friday 19 September 2025 11:25:25 +0000 (0:00:00.239) 0:00:00.239 ****** 2025-09-19 11:25:32.983246 | orchestrator | ok: [testbed-manager] 2025-09-19 11:25:32.983265 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:25:32.983282 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:25:32.983327 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:25:32.983347 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:25:32.983365 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:25:32.983383 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:25:32.983402 | orchestrator | 2025-09-19 11:25:32.983421 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 
11:25:32.983439 | orchestrator | Friday 19 September 2025 11:25:26 +0000 (0:00:00.981) 0:00:01.220 ****** 2025-09-19 11:25:32.983456 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:25:32.983468 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:25:32.983479 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:25:32.983490 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:25:32.983501 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:25:32.983512 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:25:32.983523 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:25:32.983534 | orchestrator | 2025-09-19 11:25:32.983545 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:25:32.983556 | orchestrator | 2025-09-19 11:25:32.983582 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:25:32.983596 | orchestrator | Friday 19 September 2025 11:25:27 +0000 (0:00:01.087) 0:00:02.307 ****** 2025-09-19 11:25:32.983608 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:25:32.983620 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:25:32.983633 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:25:32.983646 | orchestrator | ok: [testbed-manager] 2025-09-19 11:25:32.983658 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:25:32.983670 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:25:32.983682 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:25:32.983721 | orchestrator | 2025-09-19 11:25:32.983734 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 11:25:32.983746 | orchestrator | 2025-09-19 11:25:32.983758 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 11:25:32.983771 | orchestrator | Friday 19 September 2025 11:25:32 +0000 (0:00:04.867) 0:00:07.175 ****** 2025-09-19 11:25:32.983783 | 
orchestrator | skipping: [testbed-manager] 2025-09-19 11:25:32.983795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:25:32.983808 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:25:32.983820 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:25:32.983832 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:25:32.983844 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:25:32.983856 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:25:32.983867 | orchestrator | 2025-09-19 11:25:32.983879 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:25:32.983892 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983906 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983919 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983931 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983942 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983953 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983964 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:25:32.983974 | orchestrator | 2025-09-19 11:25:32.983995 | orchestrator | 2025-09-19 11:25:32.984006 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:25:32.984017 | orchestrator | Friday 19 September 2025 11:25:32 +0000 (0:00:00.575) 0:00:07.751 ****** 2025-09-19 11:25:32.984027 | orchestrator | 
=============================================================================== 2025-09-19 11:25:32.984038 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.87s 2025-09-19 11:25:32.984049 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.09s 2025-09-19 11:25:32.984059 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2025-09-19 11:25:32.984070 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-09-19 11:25:34.957879 | orchestrator | 2025-09-19 11:25:34 | INFO  | Task d7669762-b0b9-4e26-a7d9-0c88c7085cf3 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-19 11:25:34.957962 | orchestrator | 2025-09-19 11:25:34 | INFO  | It takes a moment until task d7669762-b0b9-4e26-a7d9-0c88c7085cf3 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-09-19 11:25:45.265485 | orchestrator | 2025-09-19 11:25:45.265629 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 11:25:45.265657 | orchestrator | 2025-09-19 11:25:45.265669 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:25:45.265718 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.243) 0:00:00.243 ****** 2025-09-19 11:25:45.265731 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:25:45.265743 | orchestrator | 2025-09-19 11:25:45.265754 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:25:45.265765 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.240) 0:00:00.484 ****** 2025-09-19 11:25:45.265776 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:25:45.265788 | orchestrator | 2025-09-19 11:25:45.265799 | orchestrator | TASK [Add known links to the list of 
available block devices] ******************
2025-09-19 11:25:45.265811 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.203) 0:00:00.687 ******
2025-09-19 11:25:45.265822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:25:45.265833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:25:45.265844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:25:45.265867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:25:45.265879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:25:45.265889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:25:45.265900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:25:45.265911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:25:45.265921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 11:25:45.265932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:25:45.265943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:25:45.265954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:25:45.265964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:25:45.265975 | orchestrator |
2025-09-19 11:25:45.265986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.265999 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.286) 0:00:00.974 ******
2025-09-19 11:25:45.266011 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266103 | orchestrator |
2025-09-19 11:25:45.266115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266128 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.379) 0:00:01.354 ******
2025-09-19 11:25:45.266140 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266152 | orchestrator |
2025-09-19 11:25:45.266164 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266177 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.182) 0:00:01.537 ******
2025-09-19 11:25:45.266188 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266201 | orchestrator |
2025-09-19 11:25:45.266213 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266225 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.181) 0:00:01.718 ******
2025-09-19 11:25:45.266238 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266255 | orchestrator |
2025-09-19 11:25:45.266267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266279 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.175) 0:00:01.894 ******
2025-09-19 11:25:45.266291 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266303 | orchestrator |
2025-09-19 11:25:45.266316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266328 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.180) 0:00:02.075 ******
2025-09-19 11:25:45.266339 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266350 | orchestrator |
2025-09-19 11:25:45.266360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266371 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.176) 0:00:02.251 ******
2025-09-19 11:25:45.266382 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266393 | orchestrator |
2025-09-19 11:25:45.266403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266414 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.200) 0:00:02.452 ******
2025-09-19 11:25:45.266425 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.266435 | orchestrator |
2025-09-19 11:25:45.266446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266457 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.187) 0:00:02.640 ******
2025-09-19 11:25:45.266468 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0)
2025-09-19 11:25:45.266481 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0)
2025-09-19 11:25:45.266491 | orchestrator |
2025-09-19 11:25:45.266502 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266513 | orchestrator | Friday 19 September 2025 11:25:41 +0000 (0:00:00.428) 0:00:03.068 ******
2025-09-19 11:25:45.266546 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f)
2025-09-19 11:25:45.266557 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f)
2025-09-19 11:25:45.266568 | orchestrator |
2025-09-19 11:25:45.266579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266590 | orchestrator | Friday 19 September 2025 11:25:41 +0000 (0:00:00.377) 0:00:03.446 ******
2025-09-19 11:25:45.266607 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238)
2025-09-19 11:25:45.266618 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238)
2025-09-19 11:25:45.266629 | orchestrator |
2025-09-19 11:25:45.266640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266651 | orchestrator | Friday 19 September 2025 11:25:42 +0000 (0:00:00.548) 0:00:03.994 ******
2025-09-19 11:25:45.266662 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6)
2025-09-19 11:25:45.266882 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6)
2025-09-19 11:25:45.266901 | orchestrator |
2025-09-19 11:25:45.266912 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:45.266923 | orchestrator | Friday 19 September 2025 11:25:42 +0000 (0:00:00.524) 0:00:04.519 ******
2025-09-19 11:25:45.266934 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:25:45.266946 | orchestrator |
2025-09-19 11:25:45.266957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.266968 | orchestrator | Friday 19 September 2025 11:25:43 +0000 (0:00:00.576) 0:00:05.096 ******
2025-09-19 11:25:45.266979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:25:45.266990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:25:45.267002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:25:45.267013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:25:45.267024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:25:45.267035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:25:45.267046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:25:45.267057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:25:45.267068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-19 11:25:45.267079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:25:45.267090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:25:45.267102 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:25:45.267113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:25:45.267124 | orchestrator |
2025-09-19 11:25:45.267136 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267147 | orchestrator | Friday 19 September 2025 11:25:43 +0000 (0:00:00.366) 0:00:05.463 ******
2025-09-19 11:25:45.267158 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267170 | orchestrator |
2025-09-19 11:25:45.267181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267192 | orchestrator | Friday 19 September 2025 11:25:43 +0000 (0:00:00.172) 0:00:05.635 ******
2025-09-19 11:25:45.267203 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267214 | orchestrator |
2025-09-19 11:25:45.267226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267237 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.215) 0:00:05.850 ******
2025-09-19 11:25:45.267248 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267259 | orchestrator |
2025-09-19 11:25:45.267270 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267281 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.182) 0:00:06.033 ******
2025-09-19 11:25:45.267292 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267303 | orchestrator |
2025-09-19 11:25:45.267315 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267326 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.179) 0:00:06.212 ******
2025-09-19 11:25:45.267337 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267348 | orchestrator |
2025-09-19 11:25:45.267369 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267380 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.189) 0:00:06.402 ******
2025-09-19 11:25:45.267391 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267402 | orchestrator |
2025-09-19 11:25:45.267414 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267425 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.208) 0:00:06.611 ******
2025-09-19 11:25:45.267436 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:45.267447 | orchestrator |
2025-09-19 11:25:45.267458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:45.267469 | orchestrator | Friday 19 September 2025 11:25:45 +0000 (0:00:00.204) 0:00:06.816 ******
2025-09-19 11:25:45.267492 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029299 | orchestrator |
2025-09-19 11:25:53.029436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:53.029460 | orchestrator | Friday 19 September 2025 11:25:45 +0000 (0:00:00.194) 0:00:07.010 ******
2025-09-19 11:25:53.029473 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 11:25:53.029486 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 11:25:53.029497 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 11:25:53.029508 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 11:25:53.029519 | orchestrator |
2025-09-19 11:25:53.029531 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:53.029542 | orchestrator | Friday 19 September 2025 11:25:46 +0000 (0:00:01.043) 0:00:08.053 ******
2025-09-19 11:25:53.029573 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029584 | orchestrator |
2025-09-19 11:25:53.029596 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:53.029607 | orchestrator | Friday 19 September 2025 11:25:46 +0000 (0:00:00.201) 0:00:08.255 ******
2025-09-19 11:25:53.029618 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029629 | orchestrator |
2025-09-19 11:25:53.029640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:53.029651 | orchestrator | Friday 19 September 2025 11:25:46 +0000 (0:00:00.204) 0:00:08.459 ******
2025-09-19 11:25:53.029662 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029700 | orchestrator |
2025-09-19 11:25:53.029712 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:25:53.029723 | orchestrator | Friday 19 September 2025 11:25:46 +0000 (0:00:00.220) 0:00:08.680 ******
2025-09-19 11:25:53.029734 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029745 | orchestrator |
2025-09-19 11:25:53.029756 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 11:25:53.029767 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.220) 0:00:08.900 ******
2025-09-19 11:25:53.029778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-19 11:25:53.029789 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-19 11:25:53.029800 | orchestrator |
2025-09-19 11:25:53.029812 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 11:25:53.029825 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.185) 0:00:09.086 ******
2025-09-19 11:25:53.029838 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029850 | orchestrator |
2025-09-19 11:25:53.029862 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 11:25:53.029875 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.141) 0:00:09.227 ******
2025-09-19 11:25:53.029887 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029900 | orchestrator |
2025-09-19 11:25:53.029913 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 11:25:53.029925 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.141) 0:00:09.368 ******
2025-09-19 11:25:53.029937 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.029972 | orchestrator |
2025-09-19 11:25:53.029986 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 11:25:53.029999 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.126) 0:00:09.495 ******
2025-09-19 11:25:53.030060 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:25:53.030075 | orchestrator |
2025-09-19 11:25:53.030088 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 11:25:53.030100 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:00.140) 0:00:09.636 ******
2025-09-19 11:25:53.030113 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75d7215-6866-5647-89df-878c4666c32d'}})
2025-09-19 11:25:53.030125 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}})
2025-09-19 11:25:53.030136 | orchestrator |
2025-09-19 11:25:53.030147 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 11:25:53.030158 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:00.174) 0:00:09.810 ******
2025-09-19 11:25:53.030169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75d7215-6866-5647-89df-878c4666c32d'}})
2025-09-19 11:25:53.030190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}})
2025-09-19 11:25:53.030201 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030212 | orchestrator |
2025-09-19 11:25:53.030223 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 11:25:53.030234 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:00.157) 0:00:09.967 ******
2025-09-19 11:25:53.030245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75d7215-6866-5647-89df-878c4666c32d'}})
2025-09-19 11:25:53.030256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}})
2025-09-19 11:25:53.030267 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030278 | orchestrator |
2025-09-19 11:25:53.030289 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 11:25:53.030300 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:00.343) 0:00:10.311 ******
2025-09-19 11:25:53.030311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c75d7215-6866-5647-89df-878c4666c32d'}})
2025-09-19 11:25:53.030322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}})
2025-09-19 11:25:53.030333 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030344 | orchestrator |
2025-09-19 11:25:53.030377 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 11:25:53.030389 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:00.161) 0:00:10.472 ******
2025-09-19 11:25:53.030400 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:25:53.030411 | orchestrator |
2025-09-19 11:25:53.030422 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 11:25:53.030432 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:00.148) 0:00:10.621 ******
2025-09-19 11:25:53.030443 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:25:53.030454 | orchestrator |
2025-09-19 11:25:53.030465 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 11:25:53.030476 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.155) 0:00:10.776 ******
2025-09-19 11:25:53.030487 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030498 | orchestrator |
2025-09-19 11:25:53.030509 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 11:25:53.030520 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.153) 0:00:10.930 ******
2025-09-19 11:25:53.030530 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030541 | orchestrator |
2025-09-19 11:25:53.030561 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 11:25:53.030572 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.233) 0:00:11.163 ******
2025-09-19 11:25:53.030583 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030594 | orchestrator |
2025-09-19 11:25:53.030605 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 11:25:53.030616 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.149) 0:00:11.299 ******
2025-09-19 11:25:53.030627 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 11:25:53.030638 | orchestrator |  "ceph_osd_devices": {
2025-09-19 11:25:53.030650 | orchestrator |  "sdb": {
2025-09-19 11:25:53.030661 | orchestrator |  "osd_lvm_uuid": "c75d7215-6866-5647-89df-878c4666c32d"
2025-09-19 11:25:53.030738 | orchestrator |  },
2025-09-19 11:25:53.030751 | orchestrator |  "sdc": {
2025-09-19 11:25:53.030762 | orchestrator |  "osd_lvm_uuid": "b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0"
2025-09-19 11:25:53.030774 | orchestrator |  }
2025-09-19 11:25:53.030785 | orchestrator |  }
2025-09-19 11:25:53.030796 | orchestrator | }
2025-09-19 11:25:53.030807 | orchestrator |
2025-09-19 11:25:53.030818 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 11:25:53.030829 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.149) 0:00:11.448 ******
2025-09-19 11:25:53.030840 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030851 | orchestrator |
2025-09-19 11:25:53.030862 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 11:25:53.030873 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.124) 0:00:11.573 ******
2025-09-19 11:25:53.030891 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030902 | orchestrator |
2025-09-19 11:25:53.030913 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 11:25:53.030924 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:00.131) 0:00:11.705 ******
2025-09-19 11:25:53.030935 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:25:53.030945 | orchestrator |
2025-09-19 11:25:53.030956 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 11:25:53.030967 | orchestrator | Friday 19 September 2025 11:25:50 +0000 (0:00:00.135) 0:00:11.840 ******
2025-09-19 11:25:53.030978 | orchestrator | changed: [testbed-node-3] => {
2025-09-19 11:25:53.030989 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-19 11:25:53.031000 | orchestrator |  "ceph_osd_devices": {
2025-09-19 11:25:53.031011 | orchestrator |  "sdb": {
2025-09-19 11:25:53.031022 | orchestrator |  "osd_lvm_uuid": "c75d7215-6866-5647-89df-878c4666c32d"
2025-09-19 11:25:53.031033 | orchestrator |  },
2025-09-19 11:25:53.031044 | orchestrator |  "sdc": {
2025-09-19 11:25:53.031055 | orchestrator |  "osd_lvm_uuid": "b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0"
2025-09-19 11:25:53.031066 | orchestrator |  }
2025-09-19 11:25:53.031077 | orchestrator |  },
2025-09-19 11:25:53.031088 | orchestrator |  "lvm_volumes": [
2025-09-19 11:25:53.031099 | orchestrator |  {
2025-09-19 11:25:53.031110 | orchestrator |  "data": "osd-block-c75d7215-6866-5647-89df-878c4666c32d",
2025-09-19 11:25:53.031121 | orchestrator |  "data_vg": "ceph-c75d7215-6866-5647-89df-878c4666c32d"
2025-09-19 11:25:53.031131 | orchestrator |  },
2025-09-19 11:25:53.031141 | orchestrator |  {
2025-09-19 11:25:53.031151 | orchestrator |  "data": "osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0",
2025-09-19 11:25:53.031160 | orchestrator |  "data_vg": "ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0"
2025-09-19 11:25:53.031170 | orchestrator |  }
2025-09-19 11:25:53.031180 | orchestrator |  ]
2025-09-19 11:25:53.031189 | orchestrator |  }
2025-09-19 11:25:53.031199 | orchestrator | }
2025-09-19 11:25:53.031209 | orchestrator |
2025-09-19 11:25:53.031219 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 11:25:53.031235 | orchestrator | Friday 19 September 2025 11:25:50 +0000 (0:00:00.209) 0:00:12.049 ******
2025-09-19 11:25:53.031245 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:25:53.031255 | orchestrator |
2025-09-19 11:25:53.031265 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 11:25:53.031274 | orchestrator |
2025-09-19 11:25:53.031284 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 11:25:53.031294 | orchestrator | Friday 19 September 2025 11:25:52 +0000 (0:00:02.231) 0:00:14.281 ******
2025-09-19 11:25:53.031303 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 11:25:53.031313 | orchestrator |
2025-09-19 11:25:53.031323 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 11:25:53.031332 | orchestrator | Friday 19 September 2025 11:25:52 +0000 (0:00:00.243) 0:00:14.533 ******
2025-09-19 11:25:53.031342 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:25:53.031352 | orchestrator |
2025-09-19 11:25:53.031362 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:25:53.031379 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.243) 0:00:14.776 ******
2025-09-19 11:26:00.465778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:26:00.465867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:26:00.465882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:26:00.465894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:26:00.465905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:26:00.465946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:26:00.465961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:26:00.465972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:26:00.465983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 11:26:00.465994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:26:00.466069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:26:00.466084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:26:00.466095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:26:00.466110 | orchestrator |
2025-09-19 11:26:00.466122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466134 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.402) 0:00:15.178 ******
2025-09-19 11:26:00.466145 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466157 | orchestrator |
2025-09-19 11:26:00.466169 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466180 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.207) 0:00:15.385 ******
2025-09-19 11:26:00.466191 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466232 | orchestrator |
2025-09-19 11:26:00.466245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466256 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.199) 0:00:15.586 ******
2025-09-19 11:26:00.466267 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466280 | orchestrator |
2025-09-19 11:26:00.466293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466306 | orchestrator | Friday 19 September 2025 11:25:54 +0000 (0:00:00.204) 0:00:15.790 ******
2025-09-19 11:26:00.466319 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466347 | orchestrator |
2025-09-19 11:26:00.466360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466373 | orchestrator | Friday 19 September 2025 11:25:54 +0000 (0:00:00.204) 0:00:15.995 ******
2025-09-19 11:26:00.466386 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466398 | orchestrator |
2025-09-19 11:26:00.466410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466423 | orchestrator | Friday 19 September 2025 11:25:54 +0000 (0:00:00.602) 0:00:16.598 ******
2025-09-19 11:26:00.466436 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466449 | orchestrator |
2025-09-19 11:26:00.466461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466474 | orchestrator | Friday 19 September 2025 11:25:55 +0000 (0:00:00.220) 0:00:16.818 ******
2025-09-19 11:26:00.466486 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466498 | orchestrator |
2025-09-19 11:26:00.466510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466522 | orchestrator | Friday 19 September 2025 11:25:55 +0000 (0:00:00.245) 0:00:17.064 ******
2025-09-19 11:26:00.466535 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.466547 | orchestrator |
2025-09-19 11:26:00.466560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466572 | orchestrator | Friday 19 September 2025 11:25:55 +0000 (0:00:00.205) 0:00:17.269 ******
2025-09-19 11:26:00.466585 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03)
2025-09-19 11:26:00.466599 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03)
2025-09-19 11:26:00.466611 | orchestrator |
2025-09-19 11:26:00.466625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466636 | orchestrator | Friday 19 September 2025 11:25:55 +0000 (0:00:00.474) 0:00:17.744 ******
2025-09-19 11:26:00.466647 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd)
2025-09-19 11:26:00.466658 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd)
2025-09-19 11:26:00.466721 | orchestrator |
2025-09-19 11:26:00.466733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466744 | orchestrator | Friday 19 September 2025 11:25:56 +0000 (0:00:00.410) 0:00:18.155 ******
2025-09-19 11:26:00.466755 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7)
2025-09-19 11:26:00.466766 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7)
2025-09-19 11:26:00.466777 | orchestrator |
2025-09-19 11:26:00.466788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466799 | orchestrator | Friday 19 September 2025 11:25:56 +0000 (0:00:00.418) 0:00:18.573 ******
2025-09-19 11:26:00.466827 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d)
2025-09-19 11:26:00.466839 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d)
2025-09-19 11:26:00.466850 | orchestrator |
2025-09-19 11:26:00.466861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:26:00.466872 | orchestrator | Friday 19 September 2025 11:25:57 +0000 (0:00:00.433) 0:00:19.007 ******
2025-09-19 11:26:00.466884 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:26:00.466895 | orchestrator |
2025-09-19 11:26:00.466906 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.466924 | orchestrator | Friday 19 September 2025 11:25:57 +0000 (0:00:00.316) 0:00:19.324 ******
2025-09-19 11:26:00.466935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:26:00.466954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:26:00.466965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:26:00.466976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:26:00.466987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:26:00.466998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:26:00.467008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:26:00.467019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:26:00.467030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 11:26:00.467041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:26:00.467051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:26:00.467062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:26:00.467073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:26:00.467084 | orchestrator |
2025-09-19 11:26:00.467094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467105 | orchestrator | Friday 19 September 2025 11:25:57 +0000 (0:00:00.354) 0:00:19.678 ******
2025-09-19 11:26:00.467116 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467127 | orchestrator |
2025-09-19 11:26:00.467138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467149 | orchestrator | Friday 19 September 2025 11:25:58 +0000 (0:00:00.168) 0:00:19.847 ******
2025-09-19 11:26:00.467160 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467171 | orchestrator |
2025-09-19 11:26:00.467182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467193 | orchestrator | Friday 19 September 2025 11:25:58 +0000 (0:00:00.493) 0:00:20.340 ******
2025-09-19 11:26:00.467203 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467214 | orchestrator |
2025-09-19 11:26:00.467225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467236 | orchestrator | Friday 19 September 2025 11:25:58 +0000 (0:00:00.179) 0:00:20.519 ******
2025-09-19 11:26:00.467247 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467258 | orchestrator |
2025-09-19 11:26:00.467269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467280 | orchestrator | Friday 19 September 2025 11:25:58 +0000 (0:00:00.174) 0:00:20.694 ******
2025-09-19 11:26:00.467291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467302 | orchestrator |
2025-09-19 11:26:00.467313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467324 | orchestrator | Friday 19 September 2025 11:25:59 +0000 (0:00:00.187) 0:00:20.881 ******
2025-09-19 11:26:00.467335 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467346 | orchestrator |
2025-09-19 11:26:00.467357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467368 | orchestrator | Friday 19 September 2025 11:25:59 +0000 (0:00:00.185) 0:00:21.066 ******
2025-09-19 11:26:00.467379 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467389 | orchestrator |
2025-09-19 11:26:00.467400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467411 | orchestrator | Friday 19 September 2025 11:25:59 +0000 (0:00:00.186) 0:00:21.253 ******
2025-09-19 11:26:00.467422 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467433 | orchestrator |
2025-09-19 11:26:00.467444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467462 | orchestrator | Friday 19 September 2025 11:25:59 +0000 (0:00:00.184) 0:00:21.438 ******
2025-09-19 11:26:00.467473 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 11:26:00.467484 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 11:26:00.467495 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 11:26:00.467507 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 11:26:00.467517 | orchestrator |
2025-09-19 11:26:00.467528 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:00.467539 | orchestrator | Friday 19 September 2025 11:26:00 +0000 (0:00:00.594) 0:00:22.032 ******
2025-09-19 11:26:00.467550 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:00.467597 | orchestrator |
2025-09-19 11:26:00.467615 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:05.897373 | orchestrator | Friday 19 September 2025 11:26:00 +0000 (0:00:00.177) 0:00:22.209 ******
2025-09-19 11:26:05.897489 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.897514 | orchestrator |
2025-09-19 11:26:05.897534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:05.897552 | orchestrator | Friday 19 September 2025 11:26:00 +0000 (0:00:00.195) 0:00:22.405 ******
2025-09-19 11:26:05.897570 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.897588 | orchestrator |
2025-09-19 11:26:05.897607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:26:05.897625 | orchestrator | Friday 19 September 2025 11:26:00 +0000 (0:00:00.176) 0:00:22.582 ******
2025-09-19 11:26:05.897643 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.897714 | orchestrator |
2025-09-19 11:26:05.897754 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 11:26:05.897776 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.182) 0:00:22.765 ******
2025-09-19 11:26:05.897795 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-19 11:26:05.897814 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-19 11:26:05.897833 | orchestrator |
2025-09-19 11:26:05.897853 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 11:26:05.897872 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.318) 0:00:23.084 ******
2025-09-19 11:26:05.897891 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.897912 | orchestrator |
2025-09-19 11:26:05.897934 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 11:26:05.897955 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.115) 0:00:23.199 ******
2025-09-19 11:26:05.897976 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.897996 | orchestrator |
2025-09-19 11:26:05.898075 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 11:26:05.898098 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.142) 0:00:23.341 ******
2025-09-19 11:26:05.898120 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:26:05.898142 | orchestrator |
2025-09-19 11:26:05.898163 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 11:26:05.898183 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.113) 0:00:23.455 ******
2025-09-19 11:26:05.898203 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:05.898221 | orchestrator |
2025-09-19 11:26:05.898240 | orchestrator | TASK [Generate
lvm_volumes structure (block only)] ***************************** 2025-09-19 11:26:05.898259 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:00.142) 0:00:23.598 ****** 2025-09-19 11:26:05.898279 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ac676d1d-4f4c-546f-a12f-f85171bcd1d7'}}) 2025-09-19 11:26:05.898300 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffd16df6-6207-59ff-a831-a7eb6df6d5c2'}}) 2025-09-19 11:26:05.898319 | orchestrator | 2025-09-19 11:26:05.898337 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 11:26:05.898383 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.171) 0:00:23.769 ****** 2025-09-19 11:26:05.898404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ac676d1d-4f4c-546f-a12f-f85171bcd1d7'}})  2025-09-19 11:26:05.898424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffd16df6-6207-59ff-a831-a7eb6df6d5c2'}})  2025-09-19 11:26:05.898444 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898462 | orchestrator | 2025-09-19 11:26:05.898479 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 11:26:05.898491 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.142) 0:00:23.912 ****** 2025-09-19 11:26:05.898502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ac676d1d-4f4c-546f-a12f-f85171bcd1d7'}})  2025-09-19 11:26:05.898513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffd16df6-6207-59ff-a831-a7eb6df6d5c2'}})  2025-09-19 11:26:05.898524 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898534 | orchestrator | 2025-09-19 11:26:05.898545 | orchestrator | TASK [Generate lvm_volumes structure (block 
+ db + wal)] *********************** 2025-09-19 11:26:05.898556 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.133) 0:00:24.046 ****** 2025-09-19 11:26:05.898567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ac676d1d-4f4c-546f-a12f-f85171bcd1d7'}})  2025-09-19 11:26:05.898577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffd16df6-6207-59ff-a831-a7eb6df6d5c2'}})  2025-09-19 11:26:05.898589 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898600 | orchestrator | 2025-09-19 11:26:05.898611 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 11:26:05.898621 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.171) 0:00:24.218 ****** 2025-09-19 11:26:05.898632 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:26:05.898643 | orchestrator | 2025-09-19 11:26:05.898654 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 11:26:05.898713 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.122) 0:00:24.340 ****** 2025-09-19 11:26:05.898727 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:26:05.898738 | orchestrator | 2025-09-19 11:26:05.898748 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 11:26:05.898759 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.121) 0:00:24.462 ****** 2025-09-19 11:26:05.898770 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898781 | orchestrator | 2025-09-19 11:26:05.898810 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 11:26:05.898820 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.121) 0:00:24.583 ****** 2025-09-19 11:26:05.898830 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898840 | 
orchestrator | 2025-09-19 11:26:05.898849 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 11:26:05.898859 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.244) 0:00:24.828 ****** 2025-09-19 11:26:05.898868 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.898878 | orchestrator | 2025-09-19 11:26:05.898887 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 11:26:05.898897 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.117) 0:00:24.945 ****** 2025-09-19 11:26:05.898906 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 11:26:05.898916 | orchestrator |  "ceph_osd_devices": { 2025-09-19 11:26:05.898931 | orchestrator |  "sdb": { 2025-09-19 11:26:05.898948 | orchestrator |  "osd_lvm_uuid": "ac676d1d-4f4c-546f-a12f-f85171bcd1d7" 2025-09-19 11:26:05.898965 | orchestrator |  }, 2025-09-19 11:26:05.898981 | orchestrator |  "sdc": { 2025-09-19 11:26:05.899010 | orchestrator |  "osd_lvm_uuid": "ffd16df6-6207-59ff-a831-a7eb6df6d5c2" 2025-09-19 11:26:05.899027 | orchestrator |  } 2025-09-19 11:26:05.899043 | orchestrator |  } 2025-09-19 11:26:05.899060 | orchestrator | } 2025-09-19 11:26:05.899078 | orchestrator | 2025-09-19 11:26:05.899096 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 11:26:05.899112 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.112) 0:00:25.058 ****** 2025-09-19 11:26:05.899129 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.899145 | orchestrator | 2025-09-19 11:26:05.899172 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 11:26:05.899190 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.112) 0:00:25.171 ****** 2025-09-19 11:26:05.899207 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.899224 | 
orchestrator | 2025-09-19 11:26:05.899240 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 11:26:05.899256 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.109) 0:00:25.280 ****** 2025-09-19 11:26:05.899272 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:26:05.899288 | orchestrator | 2025-09-19 11:26:05.899304 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 11:26:05.899319 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.110) 0:00:25.391 ****** 2025-09-19 11:26:05.899336 | orchestrator | changed: [testbed-node-4] => { 2025-09-19 11:26:05.899352 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 11:26:05.899368 | orchestrator |  "ceph_osd_devices": { 2025-09-19 11:26:05.899384 | orchestrator |  "sdb": { 2025-09-19 11:26:05.899400 | orchestrator |  "osd_lvm_uuid": "ac676d1d-4f4c-546f-a12f-f85171bcd1d7" 2025-09-19 11:26:05.899425 | orchestrator |  }, 2025-09-19 11:26:05.899440 | orchestrator |  "sdc": { 2025-09-19 11:26:05.899455 | orchestrator |  "osd_lvm_uuid": "ffd16df6-6207-59ff-a831-a7eb6df6d5c2" 2025-09-19 11:26:05.899471 | orchestrator |  } 2025-09-19 11:26:05.899486 | orchestrator |  }, 2025-09-19 11:26:05.899502 | orchestrator |  "lvm_volumes": [ 2025-09-19 11:26:05.899517 | orchestrator |  { 2025-09-19 11:26:05.899532 | orchestrator |  "data": "osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7", 2025-09-19 11:26:05.899547 | orchestrator |  "data_vg": "ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7" 2025-09-19 11:26:05.899563 | orchestrator |  }, 2025-09-19 11:26:05.899578 | orchestrator |  { 2025-09-19 11:26:05.899595 | orchestrator |  "data": "osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2", 2025-09-19 11:26:05.899610 | orchestrator |  "data_vg": "ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2" 2025-09-19 11:26:05.899625 | orchestrator |  } 2025-09-19 11:26:05.899640 | orchestrator |  ] 
2025-09-19 11:26:05.899677 | orchestrator |  } 2025-09-19 11:26:05.899696 | orchestrator | } 2025-09-19 11:26:05.899713 | orchestrator | 2025-09-19 11:26:05.899730 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 11:26:05.899747 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.175) 0:00:25.566 ****** 2025-09-19 11:26:05.899763 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 11:26:05.899780 | orchestrator | 2025-09-19 11:26:05.899795 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 11:26:05.899811 | orchestrator | 2025-09-19 11:26:05.899828 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:26:05.899844 | orchestrator | Friday 19 September 2025 11:26:04 +0000 (0:00:00.877) 0:00:26.444 ****** 2025-09-19 11:26:05.899861 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 11:26:05.899877 | orchestrator | 2025-09-19 11:26:05.899893 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:26:05.899911 | orchestrator | Friday 19 September 2025 11:26:05 +0000 (0:00:00.384) 0:00:26.828 ****** 2025-09-19 11:26:05.899942 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:26:05.899960 | orchestrator | 2025-09-19 11:26:05.899977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:05.899995 | orchestrator | Friday 19 September 2025 11:26:05 +0000 (0:00:00.452) 0:00:27.280 ****** 2025-09-19 11:26:05.900011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:26:05.900021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:26:05.900031 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:26:05.900040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:26:05.900050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:26:05.900059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:26:05.900080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:26:13.185229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:26:13.185299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 11:26:13.185307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:26:13.185315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:26:13.185321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:26:13.185327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:26:13.185334 | orchestrator | 2025-09-19 11:26:13.185341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185347 | orchestrator | Friday 19 September 2025 11:26:05 +0000 (0:00:00.360) 0:00:27.640 ****** 2025-09-19 11:26:13.185354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185361 | orchestrator | 2025-09-19 11:26:13.185367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185373 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.180) 0:00:27.821 ****** 2025-09-19 11:26:13.185380 | 
orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185386 | orchestrator | 2025-09-19 11:26:13.185392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185398 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.171) 0:00:27.992 ****** 2025-09-19 11:26:13.185404 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185410 | orchestrator | 2025-09-19 11:26:13.185417 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185423 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.173) 0:00:28.166 ****** 2025-09-19 11:26:13.185429 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185435 | orchestrator | 2025-09-19 11:26:13.185441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185447 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.175) 0:00:28.342 ****** 2025-09-19 11:26:13.185453 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185459 | orchestrator | 2025-09-19 11:26:13.185466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185472 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.173) 0:00:28.515 ****** 2025-09-19 11:26:13.185478 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185484 | orchestrator | 2025-09-19 11:26:13.185490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185496 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.178) 0:00:28.694 ****** 2025-09-19 11:26:13.185503 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185522 | orchestrator | 2025-09-19 11:26:13.185529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-09-19 11:26:13.185535 | orchestrator | Friday 19 September 2025 11:26:07 +0000 (0:00:00.147) 0:00:28.842 ****** 2025-09-19 11:26:13.185541 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185547 | orchestrator | 2025-09-19 11:26:13.185564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185571 | orchestrator | Friday 19 September 2025 11:26:07 +0000 (0:00:00.156) 0:00:28.998 ****** 2025-09-19 11:26:13.185577 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba) 2025-09-19 11:26:13.185584 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba) 2025-09-19 11:26:13.185590 | orchestrator | 2025-09-19 11:26:13.185596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185602 | orchestrator | Friday 19 September 2025 11:26:07 +0000 (0:00:00.555) 0:00:29.553 ****** 2025-09-19 11:26:13.185608 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59) 2025-09-19 11:26:13.185614 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59) 2025-09-19 11:26:13.185621 | orchestrator | 2025-09-19 11:26:13.185627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185633 | orchestrator | Friday 19 September 2025 11:26:08 +0000 (0:00:00.651) 0:00:30.205 ****** 2025-09-19 11:26:13.185638 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed) 2025-09-19 11:26:13.185645 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed) 2025-09-19 11:26:13.185695 | orchestrator | 2025-09-19 11:26:13.185702 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-09-19 11:26:13.185708 | orchestrator | Friday 19 September 2025 11:26:08 +0000 (0:00:00.438) 0:00:30.644 ****** 2025-09-19 11:26:13.185714 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36) 2025-09-19 11:26:13.185720 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36) 2025-09-19 11:26:13.185727 | orchestrator | 2025-09-19 11:26:13.185733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:26:13.185739 | orchestrator | Friday 19 September 2025 11:26:09 +0000 (0:00:00.350) 0:00:30.994 ****** 2025-09-19 11:26:13.185745 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 11:26:13.185751 | orchestrator | 2025-09-19 11:26:13.185757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.185763 | orchestrator | Friday 19 September 2025 11:26:09 +0000 (0:00:00.291) 0:00:31.285 ****** 2025-09-19 11:26:13.185780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:26:13.185787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:26:13.185793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:26:13.185799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:26:13.185805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:26:13.185812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:26:13.185819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-5 => (item=loop6) 2025-09-19 11:26:13.185826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:26:13.185833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 11:26:13.185847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:26:13.185854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:26:13.185861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:26:13.185868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:26:13.185875 | orchestrator | 2025-09-19 11:26:13.185882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.185888 | orchestrator | Friday 19 September 2025 11:26:09 +0000 (0:00:00.341) 0:00:31.627 ****** 2025-09-19 11:26:13.185895 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185902 | orchestrator | 2025-09-19 11:26:13.185910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.185917 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:00.191) 0:00:31.818 ****** 2025-09-19 11:26:13.185923 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185930 | orchestrator | 2025-09-19 11:26:13.185937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.185944 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:00.203) 0:00:32.022 ****** 2025-09-19 11:26:13.185951 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185958 | orchestrator | 2025-09-19 11:26:13.185965 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-09-19 11:26:13.185972 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:00.179) 0:00:32.201 ****** 2025-09-19 11:26:13.185979 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.185986 | orchestrator | 2025-09-19 11:26:13.185993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186000 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:00.188) 0:00:32.390 ****** 2025-09-19 11:26:13.186006 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186013 | orchestrator | 2025-09-19 11:26:13.186054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186061 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:00.182) 0:00:32.573 ****** 2025-09-19 11:26:13.186069 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186075 | orchestrator | 2025-09-19 11:26:13.186082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186089 | orchestrator | Friday 19 September 2025 11:26:11 +0000 (0:00:00.524) 0:00:33.097 ****** 2025-09-19 11:26:13.186096 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186103 | orchestrator | 2025-09-19 11:26:13.186110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186117 | orchestrator | Friday 19 September 2025 11:26:11 +0000 (0:00:00.187) 0:00:33.284 ****** 2025-09-19 11:26:13.186124 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186131 | orchestrator | 2025-09-19 11:26:13.186138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186145 | orchestrator | Friday 19 September 2025 11:26:11 +0000 (0:00:00.185) 0:00:33.470 ****** 2025-09-19 11:26:13.186152 | orchestrator | ok: 
[testbed-node-5] => (item=sda1) 2025-09-19 11:26:13.186159 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 11:26:13.186166 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 11:26:13.186173 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 11:26:13.186179 | orchestrator | 2025-09-19 11:26:13.186185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186192 | orchestrator | Friday 19 September 2025 11:26:12 +0000 (0:00:00.610) 0:00:34.081 ****** 2025-09-19 11:26:13.186198 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186204 | orchestrator | 2025-09-19 11:26:13.186210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186222 | orchestrator | Friday 19 September 2025 11:26:12 +0000 (0:00:00.196) 0:00:34.278 ****** 2025-09-19 11:26:13.186229 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186235 | orchestrator | 2025-09-19 11:26:13.186241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186247 | orchestrator | Friday 19 September 2025 11:26:12 +0000 (0:00:00.210) 0:00:34.489 ****** 2025-09-19 11:26:13.186253 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186259 | orchestrator | 2025-09-19 11:26:13.186265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:26:13.186272 | orchestrator | Friday 19 September 2025 11:26:12 +0000 (0:00:00.226) 0:00:34.715 ****** 2025-09-19 11:26:13.186281 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:13.186288 | orchestrator | 2025-09-19 11:26:13.186294 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 11:26:13.186304 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.217) 0:00:34.932 ****** 2025-09-19 
11:26:16.886284 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-19 11:26:16.886383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-19 11:26:16.886398 | orchestrator | 2025-09-19 11:26:16.886411 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 11:26:16.886422 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.147) 0:00:35.080 ****** 2025-09-19 11:26:16.886434 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.886445 | orchestrator | 2025-09-19 11:26:16.886456 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 11:26:16.886467 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.115) 0:00:35.196 ****** 2025-09-19 11:26:16.886478 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.886489 | orchestrator | 2025-09-19 11:26:16.886500 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 11:26:16.886510 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.113) 0:00:35.309 ****** 2025-09-19 11:26:16.886525 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.886543 | orchestrator | 2025-09-19 11:26:16.886560 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 11:26:16.886578 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.133) 0:00:35.443 ****** 2025-09-19 11:26:16.886595 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:26:16.886613 | orchestrator | 2025-09-19 11:26:16.886631 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 11:26:16.886681 | orchestrator | Friday 19 September 2025 11:26:13 +0000 (0:00:00.271) 0:00:35.714 ****** 2025-09-19 11:26:16.886704 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '9d0af248-3195-52cb-bed6-977ad9e4ee39'}}) 2025-09-19 11:26:16.886724 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e702043-5e82-5f33-ad25-d539496f9fd1'}}) 2025-09-19 11:26:16.886743 | orchestrator | 2025-09-19 11:26:16.886763 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 11:26:16.886780 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.175) 0:00:35.890 ****** 2025-09-19 11:26:16.886800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9d0af248-3195-52cb-bed6-977ad9e4ee39'}})  2025-09-19 11:26:16.886813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e702043-5e82-5f33-ad25-d539496f9fd1'}})  2025-09-19 11:26:16.886826 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.886837 | orchestrator | 2025-09-19 11:26:16.886866 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 11:26:16.886879 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.146) 0:00:36.036 ****** 2025-09-19 11:26:16.886891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9d0af248-3195-52cb-bed6-977ad9e4ee39'}})  2025-09-19 11:26:16.886925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e702043-5e82-5f33-ad25-d539496f9fd1'}})  2025-09-19 11:26:16.886938 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.886950 | orchestrator | 2025-09-19 11:26:16.886962 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 11:26:16.886975 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.140) 0:00:36.177 ****** 2025-09-19 11:26:16.886987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '9d0af248-3195-52cb-bed6-977ad9e4ee39'}})  2025-09-19 11:26:16.887001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e702043-5e82-5f33-ad25-d539496f9fd1'}})  2025-09-19 11:26:16.887013 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.887025 | orchestrator | 2025-09-19 11:26:16.887037 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 11:26:16.887050 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.127) 0:00:36.304 ****** 2025-09-19 11:26:16.887062 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:26:16.887073 | orchestrator | 2025-09-19 11:26:16.887084 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 11:26:16.887095 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.130) 0:00:36.435 ****** 2025-09-19 11:26:16.887105 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:26:16.887116 | orchestrator | 2025-09-19 11:26:16.887127 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 11:26:16.887138 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.122) 0:00:36.557 ****** 2025-09-19 11:26:16.887148 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.887159 | orchestrator | 2025-09-19 11:26:16.887169 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 11:26:16.887180 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:00.117) 0:00:36.675 ****** 2025-09-19 11:26:16.887190 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:26:16.887201 | orchestrator | 2025-09-19 11:26:16.887212 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 11:26:16.887223 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.111) 0:00:36.787 ****** 
2025-09-19 11:26:16.887234 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:26:16.887244 | orchestrator |
2025-09-19 11:26:16.887255 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 11:26:16.887266 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.139) 0:00:36.926 ******
2025-09-19 11:26:16.887277 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:26:16.887288 | orchestrator |     "ceph_osd_devices": {
2025-09-19 11:26:16.887299 | orchestrator |         "sdb": {
2025-09-19 11:26:16.887310 | orchestrator |             "osd_lvm_uuid": "9d0af248-3195-52cb-bed6-977ad9e4ee39"
2025-09-19 11:26:16.887341 | orchestrator |         },
2025-09-19 11:26:16.887353 | orchestrator |         "sdc": {
2025-09-19 11:26:16.887364 | orchestrator |             "osd_lvm_uuid": "6e702043-5e82-5f33-ad25-d539496f9fd1"
2025-09-19 11:26:16.887375 | orchestrator |         }
2025-09-19 11:26:16.887386 | orchestrator |     }
2025-09-19 11:26:16.887397 | orchestrator | }
2025-09-19 11:26:16.887408 | orchestrator |
2025-09-19 11:26:16.887419 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 11:26:16.887430 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.114) 0:00:37.041 ******
2025-09-19 11:26:16.887441 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:26:16.887452 | orchestrator |
2025-09-19 11:26:16.887463 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 11:26:16.887474 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.109) 0:00:37.151 ******
2025-09-19 11:26:16.887484 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:26:16.887495 | orchestrator |
2025-09-19 11:26:16.887506 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 11:26:16.887525 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.280) 0:00:37.431 ******
2025-09-19 11:26:16.887536 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:26:16.887546 | orchestrator |
2025-09-19 11:26:16.887558 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 11:26:16.887568 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.111) 0:00:37.543 ******
2025-09-19 11:26:16.887579 | orchestrator | changed: [testbed-node-5] => {
2025-09-19 11:26:16.887590 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 11:26:16.887601 | orchestrator |         "ceph_osd_devices": {
2025-09-19 11:26:16.887612 | orchestrator |             "sdb": {
2025-09-19 11:26:16.887623 | orchestrator |                 "osd_lvm_uuid": "9d0af248-3195-52cb-bed6-977ad9e4ee39"
2025-09-19 11:26:16.887634 | orchestrator |             },
2025-09-19 11:26:16.887754 | orchestrator |             "sdc": {
2025-09-19 11:26:16.887778 | orchestrator |                 "osd_lvm_uuid": "6e702043-5e82-5f33-ad25-d539496f9fd1"
2025-09-19 11:26:16.887789 | orchestrator |             }
2025-09-19 11:26:16.887800 | orchestrator |         },
2025-09-19 11:26:16.887811 | orchestrator |         "lvm_volumes": [
2025-09-19 11:26:16.887822 | orchestrator |             {
2025-09-19 11:26:16.887833 | orchestrator |                 "data": "osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39",
2025-09-19 11:26:16.887844 | orchestrator |                 "data_vg": "ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39"
2025-09-19 11:26:16.887860 | orchestrator |             },
2025-09-19 11:26:16.887879 | orchestrator |             {
2025-09-19 11:26:16.887896 | orchestrator |                 "data": "osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1",
2025-09-19 11:26:16.887924 | orchestrator |                 "data_vg": "ceph-6e702043-5e82-5f33-ad25-d539496f9fd1"
2025-09-19 11:26:16.887944 | orchestrator |             }
2025-09-19 11:26:16.887963 | orchestrator |         ]
2025-09-19 11:26:16.887982 | orchestrator |     }
2025-09-19 11:26:16.888006 | orchestrator | }
2025-09-19 11:26:16.888019 | orchestrator |
2025-09-19 11:26:16.888029 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 11:26:16.888040 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:00.202) 0:00:37.745 ******
2025-09-19 11:26:16.888051 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 11:26:16.888062 | orchestrator |
2025-09-19 11:26:16.888072 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:26:16.888094 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:26:16.888107 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:26:16.888118 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:26:16.888129 | orchestrator |
2025-09-19 11:26:16.888140 | orchestrator |
2025-09-19 11:26:16.888150 | orchestrator |
2025-09-19 11:26:16.888161 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:26:16.888172 | orchestrator | Friday 19 September 2025 11:26:16 +0000 (0:00:00.873) 0:00:38.619 ******
2025-09-19 11:26:16.888182 | orchestrator | ===============================================================================
2025-09-19 11:26:16.888193 | orchestrator | Write configuration file ------------------------------------------------ 3.98s
2025-09-19 11:26:16.888204 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2025-09-19 11:26:16.888215 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s
2025-09-19 11:26:16.888225 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2025-09-19 11:26:16.888236 | orchestrator | Get initial list of available block devices ----------------------------- 0.90s
2025-09-19 11:26:16.888262 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s
2025-09-19 11:26:16.888281 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s
2025-09-19 11:26:16.888298 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-09-19 11:26:16.888315 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.62s
2025-09-19 11:26:16.888331 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-09-19 11:26:16.888347 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-19 11:26:16.888366 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2025-09-19 11:26:16.888385 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s
2025-09-19 11:26:16.888403 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2025-09-19 11:26:16.888432 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-09-19 11:26:17.286379 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-09-19 11:26:17.286472 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.55s
2025-09-19 11:26:17.286484 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-09-19 11:26:17.286493 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2025-09-19 11:26:17.286501 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2025-09-19 11:26:39.809050 | orchestrator | 2025-09-19 11:26:39 | INFO  | Task db679ab9-79d7-436f-9f18-a53a5defb9b3 (sync inventory) is running in background. Output coming soon.
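The configuration data printed by this play follows a simple naming convention: each entry in `ceph_osd_devices` yields one `lvm_volumes` item whose `data` LV is `osd-block-<osd_lvm_uuid>` and whose `data_vg` is `ceph-<osd_lvm_uuid>`. A minimal sketch of that mapping, using the UUIDs from the log above (illustrative only, not the playbook's actual implementation):

```python
# Derive lvm_volumes from ceph_osd_devices, mirroring the naming
# convention visible in the "Print configuration data" output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "9d0af248-3195-52cb-bed6-977ad9e4ee39"},
    "sdc": {"osd_lvm_uuid": "6e702043-5e82-5f33-ad25-d539496f9fd1"},
}

def build_lvm_volumes(devices: dict) -> list:
    """Map each OSD device to its LV ('data') and VG ('data_vg') names."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

Because no `ceph_db_devices` or `ceph_wal_devices` are configured in this run, the block+db, block+wal, and block+db+wal variants of the task are all skipped and only this plain block layout is compiled.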
2025-09-19 11:27:04.364721 | orchestrator | 2025-09-19 11:26:41 | INFO  | Starting group_vars file reorganization
2025-09-19 11:27:04.364818 | orchestrator | 2025-09-19 11:26:41 | INFO  | Moved 0 file(s) to their respective directories
2025-09-19 11:27:04.364834 | orchestrator | 2025-09-19 11:26:41 | INFO  | Group_vars file reorganization completed
2025-09-19 11:27:04.364846 | orchestrator | 2025-09-19 11:26:43 | INFO  | Starting variable preparation from inventory
2025-09-19 11:27:04.364858 | orchestrator | 2025-09-19 11:26:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-19 11:27:04.364869 | orchestrator | 2025-09-19 11:26:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-19 11:27:04.364880 | orchestrator | 2025-09-19 11:26:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-19 11:27:04.364891 | orchestrator | 2025-09-19 11:26:47 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-19 11:27:04.364902 | orchestrator | 2025-09-19 11:26:47 | INFO  | Variable preparation completed
2025-09-19 11:27:04.364913 | orchestrator | 2025-09-19 11:26:48 | INFO  | Starting inventory overwrite handling
2025-09-19 11:27:04.364925 | orchestrator | 2025-09-19 11:26:48 | INFO  | Handling group overwrites in 99-overwrite
2025-09-19 11:27:04.364937 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group frr:children from 60-generic
2025-09-19 11:27:04.364948 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group storage:children from 50-kolla
2025-09-19 11:27:04.364959 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-19 11:27:04.364970 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-19 11:27:04.364981 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-19 11:27:04.364992 | orchestrator | 2025-09-19 11:26:48 | INFO  | Handling group overwrites in 20-roles
2025-09-19 11:27:04.365003 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-19 11:27:04.365042 | orchestrator | 2025-09-19 11:26:48 | INFO  | Removed 6 group(s) in total
2025-09-19 11:27:04.365055 | orchestrator | 2025-09-19 11:26:48 | INFO  | Inventory overwrite handling completed
2025-09-19 11:27:04.365066 | orchestrator | 2025-09-19 11:26:49 | INFO  | Starting merge of inventory files
2025-09-19 11:27:04.365076 | orchestrator | 2025-09-19 11:26:49 | INFO  | Inventory files merged successfully
2025-09-19 11:27:04.365087 | orchestrator | 2025-09-19 11:26:53 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-19 11:27:04.365099 | orchestrator | 2025-09-19 11:27:03 | INFO  | Successfully wrote ClusterShell configuration
2025-09-19 11:27:04.365110 | orchestrator | [master 3203ffa] 2025-09-19-11-27
2025-09-19 11:27:04.365123 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-19 11:27:06.243140 | orchestrator | 2025-09-19 11:27:06 | INFO  | Task b14ba954-c2b2-4057-b7bc-5f257ee62fdc (ceph-create-lvm-devices) was prepared for execution.
2025-09-19 11:27:06.243851 | orchestrator | 2025-09-19 11:27:06 | INFO  | It takes a moment until task b14ba954-c2b2-4057-b7bc-5f257ee62fdc (ceph-create-lvm-devices) has been started and output is visible here.
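The `ceph-create-lvm-devices` play whose output follows creates one volume group and one `osd-block-*` logical volume per configured device (the "Create block VGs" / "Create block LVs" tasks). A hypothetical sketch of the equivalent LVM commands, built from the naming scheme in the log; the exact flags (`-l 100%FREE` etc.) and the `pvcreate` step are assumptions, not taken from the playbook:

```python
# Build the LVM command sequence for one OSD device. Names follow the
# ceph-<uuid> VG / osd-block-<uuid> LV convention seen in the log;
# the concrete flags are illustrative assumptions.
def lvm_commands(device: str, osd_lvm_uuid: str) -> list:
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"pvcreate /dev/{device}",          # initialize the disk as a PV
        f"vgcreate {vg} /dev/{device}",     # one VG per OSD device
        f"lvcreate -n {lv} -l 100%FREE {vg}",  # one LV spanning the VG
    ]
```

With the UUID logged for `sdb` on testbed-node-3, `lvm_commands("sdb", "c75d7215-6866-5647-89df-878c4666c32d")` yields the VG `ceph-c75d7215-…` and LV `osd-block-c75d7215-…` that appear in the "changed" items below.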
2025-09-19 11:27:16.545123 | orchestrator | 2025-09-19 11:27:16.545226 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 11:27:16.545242 | orchestrator | 2025-09-19 11:27:16.545254 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:27:16.545266 | orchestrator | Friday 19 September 2025 11:27:09 +0000 (0:00:00.278) 0:00:00.278 ****** 2025-09-19 11:27:16.545278 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:27:16.545289 | orchestrator | 2025-09-19 11:27:16.545300 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:27:16.545311 | orchestrator | Friday 19 September 2025 11:27:09 +0000 (0:00:00.215) 0:00:00.493 ****** 2025-09-19 11:27:16.545322 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:16.545334 | orchestrator | 2025-09-19 11:27:16.545345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545356 | orchestrator | Friday 19 September 2025 11:27:09 +0000 (0:00:00.200) 0:00:00.694 ****** 2025-09-19 11:27:16.545367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-19 11:27:16.545380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-19 11:27:16.545391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-19 11:27:16.545402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-19 11:27:16.545412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-19 11:27:16.545423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-19 11:27:16.545434 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-19 11:27:16.545445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-19 11:27:16.545456 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-19 11:27:16.545467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-19 11:27:16.545478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-19 11:27:16.545489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-19 11:27:16.545499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-19 11:27:16.545510 | orchestrator | 2025-09-19 11:27:16.545521 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545556 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.403) 0:00:01.097 ****** 2025-09-19 11:27:16.545600 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545612 | orchestrator | 2025-09-19 11:27:16.545623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545651 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.361) 0:00:01.459 ****** 2025-09-19 11:27:16.545663 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545674 | orchestrator | 2025-09-19 11:27:16.545684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545695 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.183) 0:00:01.642 ****** 2025-09-19 11:27:16.545711 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545722 | orchestrator | 2025-09-19 11:27:16.545732 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-19 11:27:16.545743 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.183) 0:00:01.826 ****** 2025-09-19 11:27:16.545754 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545765 | orchestrator | 2025-09-19 11:27:16.545775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545786 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.184) 0:00:02.010 ****** 2025-09-19 11:27:16.545797 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545808 | orchestrator | 2025-09-19 11:27:16.545819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545829 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.174) 0:00:02.184 ****** 2025-09-19 11:27:16.545840 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545851 | orchestrator | 2025-09-19 11:27:16.545862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545872 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.186) 0:00:02.370 ****** 2025-09-19 11:27:16.545883 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545894 | orchestrator | 2025-09-19 11:27:16.545905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545915 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.193) 0:00:02.564 ****** 2025-09-19 11:27:16.545926 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.545937 | orchestrator | 2025-09-19 11:27:16.545947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.545958 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.194) 0:00:02.758 ****** 2025-09-19 11:27:16.545969 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0) 2025-09-19 11:27:16.545982 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0) 2025-09-19 11:27:16.545993 | orchestrator | 2025-09-19 11:27:16.546003 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.546014 | orchestrator | Friday 19 September 2025 11:27:12 +0000 (0:00:00.399) 0:00:03.157 ****** 2025-09-19 11:27:16.546097 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f) 2025-09-19 11:27:16.546109 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f) 2025-09-19 11:27:16.546121 | orchestrator | 2025-09-19 11:27:16.546132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.546142 | orchestrator | Friday 19 September 2025 11:27:12 +0000 (0:00:00.472) 0:00:03.630 ****** 2025-09-19 11:27:16.546153 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238) 2025-09-19 11:27:16.546164 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238) 2025-09-19 11:27:16.546175 | orchestrator | 2025-09-19 11:27:16.546186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.546209 | orchestrator | Friday 19 September 2025 11:27:13 +0000 (0:00:00.582) 0:00:04.212 ****** 2025-09-19 11:27:16.546219 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6) 2025-09-19 11:27:16.546230 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6) 2025-09-19 11:27:16.546241 | orchestrator | 2025-09-19 11:27:16.546252 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:16.546263 | orchestrator | Friday 19 September 2025 11:27:14 +0000 (0:00:00.766) 0:00:04.979 ****** 2025-09-19 11:27:16.546274 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 11:27:16.546284 | orchestrator | 2025-09-19 11:27:16.546295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546306 | orchestrator | Friday 19 September 2025 11:27:14 +0000 (0:00:00.352) 0:00:05.331 ****** 2025-09-19 11:27:16.546317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 11:27:16.546327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 11:27:16.546338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 11:27:16.546348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-19 11:27:16.546359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 11:27:16.546370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 11:27:16.546380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 11:27:16.546391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 11:27:16.546402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 11:27:16.546412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 11:27:16.546423 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 11:27:16.546434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 11:27:16.546444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 11:27:16.546455 | orchestrator | 2025-09-19 11:27:16.546466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546477 | orchestrator | Friday 19 September 2025 11:27:14 +0000 (0:00:00.459) 0:00:05.791 ****** 2025-09-19 11:27:16.546487 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546498 | orchestrator | 2025-09-19 11:27:16.546509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546520 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:00.234) 0:00:06.026 ****** 2025-09-19 11:27:16.546530 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546541 | orchestrator | 2025-09-19 11:27:16.546551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546562 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:00.231) 0:00:06.257 ****** 2025-09-19 11:27:16.546631 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546643 | orchestrator | 2025-09-19 11:27:16.546653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546664 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:00.204) 0:00:06.461 ****** 2025-09-19 11:27:16.546675 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546685 | orchestrator | 2025-09-19 11:27:16.546696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546715 | orchestrator | Friday 19 September 2025 
11:27:15 +0000 (0:00:00.200) 0:00:06.662 ****** 2025-09-19 11:27:16.546726 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546736 | orchestrator | 2025-09-19 11:27:16.546747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546758 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:00.204) 0:00:06.866 ****** 2025-09-19 11:27:16.546768 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546779 | orchestrator | 2025-09-19 11:27:16.546790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546801 | orchestrator | Friday 19 September 2025 11:27:16 +0000 (0:00:00.201) 0:00:07.068 ****** 2025-09-19 11:27:16.546811 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:16.546822 | orchestrator | 2025-09-19 11:27:16.546833 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:16.546844 | orchestrator | Friday 19 September 2025 11:27:16 +0000 (0:00:00.198) 0:00:07.267 ****** 2025-09-19 11:27:16.546861 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.897924 | orchestrator | 2025-09-19 11:27:24.898075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:24.898103 | orchestrator | Friday 19 September 2025 11:27:16 +0000 (0:00:00.197) 0:00:07.465 ****** 2025-09-19 11:27:24.898125 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-19 11:27:24.898146 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-19 11:27:24.898163 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-19 11:27:24.898174 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-19 11:27:24.898185 | orchestrator | 2025-09-19 11:27:24.898200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:24.898218 | 
orchestrator | Friday 19 September 2025 11:27:17 +0000 (0:00:01.111) 0:00:08.576 ****** 2025-09-19 11:27:24.898244 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898266 | orchestrator | 2025-09-19 11:27:24.898284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:24.898302 | orchestrator | Friday 19 September 2025 11:27:17 +0000 (0:00:00.208) 0:00:08.784 ****** 2025-09-19 11:27:24.898320 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898338 | orchestrator | 2025-09-19 11:27:24.898358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:24.898377 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.210) 0:00:08.995 ****** 2025-09-19 11:27:24.898393 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898404 | orchestrator | 2025-09-19 11:27:24.898415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:27:24.898427 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.232) 0:00:09.227 ****** 2025-09-19 11:27:24.898438 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898449 | orchestrator | 2025-09-19 11:27:24.898460 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 11:27:24.898472 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.239) 0:00:09.467 ****** 2025-09-19 11:27:24.898484 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898497 | orchestrator | 2025-09-19 11:27:24.898509 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-19 11:27:24.898521 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.142) 0:00:09.609 ****** 2025-09-19 11:27:24.898533 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'c75d7215-6866-5647-89df-878c4666c32d'}}) 2025-09-19 11:27:24.898546 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}}) 2025-09-19 11:27:24.898585 | orchestrator | 2025-09-19 11:27:24.898598 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 11:27:24.898611 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.192) 0:00:09.802 ****** 2025-09-19 11:27:24.898624 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'}) 2025-09-19 11:27:24.898655 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}) 2025-09-19 11:27:24.898668 | orchestrator | 2025-09-19 11:27:24.898694 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 11:27:24.898715 | orchestrator | Friday 19 September 2025 11:27:20 +0000 (0:00:02.059) 0:00:11.861 ****** 2025-09-19 11:27:24.898728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.898742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.898754 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898766 | orchestrator | 2025-09-19 11:27:24.898779 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-19 11:27:24.898791 | orchestrator | Friday 19 September 2025 11:27:21 +0000 (0:00:00.200) 0:00:12.061 ****** 2025-09-19 11:27:24.898803 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'}) 2025-09-19 11:27:24.898815 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}) 2025-09-19 11:27:24.898827 | orchestrator | 2025-09-19 11:27:24.898838 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 11:27:24.898849 | orchestrator | Friday 19 September 2025 11:27:22 +0000 (0:00:01.569) 0:00:13.631 ****** 2025-09-19 11:27:24.898859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.898871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.898882 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898893 | orchestrator | 2025-09-19 11:27:24.898904 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 11:27:24.898915 | orchestrator | Friday 19 September 2025 11:27:22 +0000 (0:00:00.170) 0:00:13.801 ****** 2025-09-19 11:27:24.898926 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.898936 | orchestrator | 2025-09-19 11:27:24.898947 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 11:27:24.898976 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:00.142) 0:00:13.944 ****** 2025-09-19 11:27:24.898987 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.898999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899010 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899021 | orchestrator | 2025-09-19 11:27:24.899031 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 11:27:24.899042 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:00.391) 0:00:14.336 ****** 2025-09-19 11:27:24.899053 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899064 | orchestrator | 2025-09-19 11:27:24.899075 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 11:27:24.899086 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:00.198) 0:00:14.535 ****** 2025-09-19 11:27:24.899097 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.899116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899127 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899138 | orchestrator | 2025-09-19 11:27:24.899148 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 11:27:24.899159 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:00.171) 0:00:14.706 ****** 2025-09-19 11:27:24.899170 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899181 | orchestrator | 2025-09-19 11:27:24.899194 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 11:27:24.899213 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:00.113) 0:00:14.819 ****** 2025-09-19 11:27:24.899231 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.899250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899267 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899287 | orchestrator | 2025-09-19 11:27:24.899305 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 11:27:24.899324 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.138) 0:00:14.958 ****** 2025-09-19 11:27:24.899335 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:24.899346 | orchestrator | 2025-09-19 11:27:24.899357 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 11:27:24.899368 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.131) 0:00:15.090 ****** 2025-09-19 11:27:24.899384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.899395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899406 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899417 | orchestrator | 2025-09-19 11:27:24.899427 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 11:27:24.899438 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.155) 0:00:15.245 ****** 2025-09-19 11:27:24.899448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  
2025-09-19 11:27:24.899459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899470 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899480 | orchestrator | 2025-09-19 11:27:24.899491 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 11:27:24.899502 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.157) 0:00:15.403 ****** 2025-09-19 11:27:24.899512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:24.899523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:24.899534 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899545 | orchestrator | 2025-09-19 11:27:24.899573 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 11:27:24.899584 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.150) 0:00:15.554 ****** 2025-09-19 11:27:24.899595 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899613 | orchestrator | 2025-09-19 11:27:24.899624 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 11:27:24.899635 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.140) 0:00:15.694 ****** 2025-09-19 11:27:24.899646 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:24.899657 | orchestrator | 2025-09-19 11:27:24.899674 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 11:27:31.062292 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.126) 
0:00:15.820 ****** 2025-09-19 11:27:31.062382 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062393 | orchestrator | 2025-09-19 11:27:31.062401 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 11:27:31.062408 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.131) 0:00:15.951 ****** 2025-09-19 11:27:31.062416 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:27:31.062424 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 11:27:31.062431 | orchestrator | } 2025-09-19 11:27:31.062439 | orchestrator | 2025-09-19 11:27:31.062446 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 11:27:31.062453 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.265) 0:00:16.217 ****** 2025-09-19 11:27:31.062460 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:27:31.062467 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 11:27:31.062474 | orchestrator | } 2025-09-19 11:27:31.062481 | orchestrator | 2025-09-19 11:27:31.062488 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 11:27:31.062495 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.126) 0:00:16.343 ****** 2025-09-19 11:27:31.062502 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:27:31.062509 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 11:27:31.062516 | orchestrator | } 2025-09-19 11:27:31.062523 | orchestrator | 2025-09-19 11:27:31.062530 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 11:27:31.062536 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.133) 0:00:16.477 ****** 2025-09-19 11:27:31.062542 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:31.062599 | orchestrator | 2025-09-19 11:27:31.062606 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-19 11:27:31.062612 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.623) 0:00:17.100 ****** 2025-09-19 11:27:31.062618 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:31.062624 | orchestrator | 2025-09-19 11:27:31.062630 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 11:27:31.062636 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.564) 0:00:17.665 ****** 2025-09-19 11:27:31.062642 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:31.062648 | orchestrator | 2025-09-19 11:27:31.062654 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 11:27:31.062660 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.547) 0:00:18.213 ****** 2025-09-19 11:27:31.062666 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:31.062673 | orchestrator | 2025-09-19 11:27:31.062679 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 11:27:31.062686 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.150) 0:00:18.363 ****** 2025-09-19 11:27:31.062693 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062700 | orchestrator | 2025-09-19 11:27:31.062707 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 11:27:31.062713 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.132) 0:00:18.496 ****** 2025-09-19 11:27:31.062719 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062725 | orchestrator | 2025-09-19 11:27:31.062730 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 11:27:31.062737 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.120) 0:00:18.616 ****** 2025-09-19 11:27:31.062765 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-19 11:27:31.062772 | orchestrator |  "vgs_report": { 2025-09-19 11:27:31.062779 | orchestrator |  "vg": [] 2025-09-19 11:27:31.062786 | orchestrator |  } 2025-09-19 11:27:31.062792 | orchestrator | } 2025-09-19 11:27:31.062798 | orchestrator | 2025-09-19 11:27:31.062805 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 11:27:31.062812 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.143) 0:00:18.760 ****** 2025-09-19 11:27:31.062818 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062824 | orchestrator | 2025-09-19 11:27:31.062830 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 11:27:31.062837 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.132) 0:00:18.893 ****** 2025-09-19 11:27:31.062843 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062850 | orchestrator | 2025-09-19 11:27:31.062857 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 11:27:31.062863 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.144) 0:00:19.038 ****** 2025-09-19 11:27:31.062870 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062877 | orchestrator | 2025-09-19 11:27:31.062883 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 11:27:31.062890 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.403) 0:00:19.441 ****** 2025-09-19 11:27:31.062897 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062903 | orchestrator | 2025-09-19 11:27:31.062910 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 11:27:31.062917 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.141) 0:00:19.583 ****** 2025-09-19 11:27:31.062923 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:27:31.062930 | orchestrator | 2025-09-19 11:27:31.062953 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 11:27:31.062961 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.138) 0:00:19.722 ****** 2025-09-19 11:27:31.062968 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.062975 | orchestrator | 2025-09-19 11:27:31.062980 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 11:27:31.062987 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.117) 0:00:19.840 ****** 2025-09-19 11:27:31.062993 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063000 | orchestrator | 2025-09-19 11:27:31.063007 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 11:27:31.063014 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.126) 0:00:19.966 ****** 2025-09-19 11:27:31.063020 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063027 | orchestrator | 2025-09-19 11:27:31.063033 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 11:27:31.063058 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.112) 0:00:20.078 ****** 2025-09-19 11:27:31.063065 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063071 | orchestrator | 2025-09-19 11:27:31.063077 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 11:27:31.063084 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.124) 0:00:20.202 ****** 2025-09-19 11:27:31.063090 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063096 | orchestrator | 2025-09-19 11:27:31.063103 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 11:27:31.063110 | 
orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.143) 0:00:20.346 ****** 2025-09-19 11:27:31.063117 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063125 | orchestrator | 2025-09-19 11:27:31.063132 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 11:27:31.063139 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.107) 0:00:20.454 ****** 2025-09-19 11:27:31.063146 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063152 | orchestrator | 2025-09-19 11:27:31.063168 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 11:27:31.063175 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.126) 0:00:20.580 ****** 2025-09-19 11:27:31.063181 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063188 | orchestrator | 2025-09-19 11:27:31.063195 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 11:27:31.063202 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.138) 0:00:20.719 ****** 2025-09-19 11:27:31.063208 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063214 | orchestrator | 2025-09-19 11:27:31.063221 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 11:27:31.063228 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:00.132) 0:00:20.851 ****** 2025-09-19 11:27:31.063236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:31.063245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:31.063251 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:27:31.063258 | orchestrator | 2025-09-19 11:27:31.063265 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 11:27:31.063272 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.285) 0:00:21.137 ****** 2025-09-19 11:27:31.063278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:31.063284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:31.063290 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063296 | orchestrator | 2025-09-19 11:27:31.063303 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 11:27:31.063309 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.139) 0:00:21.277 ****** 2025-09-19 11:27:31.063321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:31.063328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:31.063336 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063342 | orchestrator | 2025-09-19 11:27:31.063348 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 11:27:31.063354 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.133) 0:00:21.410 ****** 2025-09-19 11:27:31.063360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 
11:27:31.063367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:31.063373 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063380 | orchestrator | 2025-09-19 11:27:31.063386 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 11:27:31.063393 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.143) 0:00:21.553 ****** 2025-09-19 11:27:31.063400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:31.063406 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:31.063413 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:31.063425 | orchestrator | 2025-09-19 11:27:31.063432 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 11:27:31.063438 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.195) 0:00:21.749 ****** 2025-09-19 11:27:31.063444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:31.063458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.659971 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660060 | orchestrator | 2025-09-19 11:27:36.660075 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 11:27:36.660087 | orchestrator | Friday 19 September 2025 
11:27:31 +0000 (0:00:00.230) 0:00:21.979 ****** 2025-09-19 11:27:36.660099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:36.660111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.660122 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660133 | orchestrator | 2025-09-19 11:27:36.660144 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 11:27:36.660155 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:00.154) 0:00:22.134 ****** 2025-09-19 11:27:36.660166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:36.660177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.660188 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660199 | orchestrator | 2025-09-19 11:27:36.660210 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 11:27:36.660221 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:00.202) 0:00:22.336 ****** 2025-09-19 11:27:36.660231 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:36.660243 | orchestrator | 2025-09-19 11:27:36.660253 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 11:27:36.660264 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:00.563) 0:00:22.900 ****** 2025-09-19 11:27:36.660275 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:36.660286 | 
orchestrator | 2025-09-19 11:27:36.660296 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 11:27:36.660307 | orchestrator | Friday 19 September 2025 11:27:32 +0000 (0:00:00.544) 0:00:23.445 ****** 2025-09-19 11:27:36.660318 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:36.660328 | orchestrator | 2025-09-19 11:27:36.660339 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 11:27:36.660350 | orchestrator | Friday 19 September 2025 11:27:32 +0000 (0:00:00.173) 0:00:23.618 ****** 2025-09-19 11:27:36.660361 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'vg_name': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'}) 2025-09-19 11:27:36.660372 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'vg_name': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'}) 2025-09-19 11:27:36.660383 | orchestrator | 2025-09-19 11:27:36.660394 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 11:27:36.660405 | orchestrator | Friday 19 September 2025 11:27:32 +0000 (0:00:00.245) 0:00:23.864 ****** 2025-09-19 11:27:36.660416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:36.660446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.660458 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660468 | orchestrator | 2025-09-19 11:27:36.660479 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 11:27:36.660490 | orchestrator | Friday 19 September 2025 11:27:33 +0000 
(0:00:00.464) 0:00:24.329 ****** 2025-09-19 11:27:36.660501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:36.660512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.660523 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660535 | orchestrator | 2025-09-19 11:27:36.660592 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 11:27:36.660604 | orchestrator | Friday 19 September 2025 11:27:33 +0000 (0:00:00.153) 0:00:24.483 ****** 2025-09-19 11:27:36.660617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})  2025-09-19 11:27:36.660629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})  2025-09-19 11:27:36.660642 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:36.660654 | orchestrator | 2025-09-19 11:27:36.660666 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 11:27:36.660678 | orchestrator | Friday 19 September 2025 11:27:33 +0000 (0:00:00.159) 0:00:24.642 ****** 2025-09-19 11:27:36.660690 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:27:36.660702 | orchestrator |  "lvm_report": { 2025-09-19 11:27:36.660715 | orchestrator |  "lv": [ 2025-09-19 11:27:36.660728 | orchestrator |  { 2025-09-19 11:27:36.660755 | orchestrator |  "lv_name": "osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0", 2025-09-19 11:27:36.660769 | orchestrator |  "vg_name": "ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0" 2025-09-19 11:27:36.660781 
| orchestrator |  }, 2025-09-19 11:27:36.660793 | orchestrator |  { 2025-09-19 11:27:36.660805 | orchestrator |  "lv_name": "osd-block-c75d7215-6866-5647-89df-878c4666c32d", 2025-09-19 11:27:36.660817 | orchestrator |  "vg_name": "ceph-c75d7215-6866-5647-89df-878c4666c32d" 2025-09-19 11:27:36.660830 | orchestrator |  } 2025-09-19 11:27:36.660841 | orchestrator |  ], 2025-09-19 11:27:36.660853 | orchestrator |  "pv": [ 2025-09-19 11:27:36.660865 | orchestrator |  { 2025-09-19 11:27:36.660878 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-19 11:27:36.660890 | orchestrator |  "vg_name": "ceph-c75d7215-6866-5647-89df-878c4666c32d" 2025-09-19 11:27:36.660901 | orchestrator |  }, 2025-09-19 11:27:36.660912 | orchestrator |  { 2025-09-19 11:27:36.660922 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-19 11:27:36.660933 | orchestrator |  "vg_name": "ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0" 2025-09-19 11:27:36.660944 | orchestrator |  } 2025-09-19 11:27:36.660955 | orchestrator |  ] 2025-09-19 11:27:36.660966 | orchestrator |  } 2025-09-19 11:27:36.660977 | orchestrator | } 2025-09-19 11:27:36.660988 | orchestrator | 2025-09-19 11:27:36.660999 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 11:27:36.661010 | orchestrator | 2025-09-19 11:27:36.661021 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:27:36.661032 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:00.289) 0:00:24.932 ****** 2025-09-19 11:27:36.661043 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 11:27:36.661064 | orchestrator | 2025-09-19 11:27:36.661076 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:27:36.661086 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:00.257) 0:00:25.189 ****** 2025-09-19 11:27:36.661097 | orchestrator | ok: [testbed-node-4] 
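The "Get list of Ceph LVs/PVs with associated VGs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks feed the `lvm_report` printed above. A minimal sketch of how such a report can be assembled from LVM's JSON output (helper names are hypothetical and the field selection is an assumption; it presumes `lvs`/`pvs` were invoked with `--reportformat json` and `-o lv_name,vg_name` / `-o pv_name,vg_name`):

```python
import json


def combine_lvm_report(lvs_json, pvs_json):
    """Merge `lvs` and `pvs` JSON report output into one dict shaped
    like the lvm_report printed by the play (lists of lv and pv rows)."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}


def vg_lv_names(report):
    """Build the 'VG/LV' name list used to verify that every LV named
    in lvm_volumes actually exists on the node."""
    return [f"{lv['vg_name']}/{lv['lv_name']}" for lv in report["lv"]]
```

Against data like the report above, `vg_lv_names` yields entries such as `ceph-<uuid>/osd-block-<uuid>`, which the "Fail if block LV defined in lvm_volumes is missing" task can then check membership against.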
2025-09-19 11:27:36.661108 | orchestrator | 2025-09-19 11:27:36.661119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661130 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:00.234) 0:00:25.424 ****** 2025-09-19 11:27:36.661155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-19 11:27:36.661167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 11:27:36.661178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 11:27:36.661188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 11:27:36.661199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 11:27:36.661211 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 11:27:36.661221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 11:27:36.661237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 11:27:36.661248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 11:27:36.661259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 11:27:36.661270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 11:27:36.661281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 11:27:36.661291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 11:27:36.661302 | orchestrator | 2025-09-19 11:27:36.661313 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661324 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:00.454) 0:00:25.878 ****** 2025-09-19 11:27:36.661335 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:36.661346 | orchestrator | 2025-09-19 11:27:36.661356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661367 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:00.204) 0:00:26.082 ****** 2025-09-19 11:27:36.661378 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:36.661389 | orchestrator | 2025-09-19 11:27:36.661399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661410 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:00.196) 0:00:26.278 ****** 2025-09-19 11:27:36.661421 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:36.661432 | orchestrator | 2025-09-19 11:27:36.661442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661453 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:00.577) 0:00:26.855 ****** 2025-09-19 11:27:36.661464 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:36.661475 | orchestrator | 2025-09-19 11:27:36.661485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661496 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.182) 0:00:27.038 ****** 2025-09-19 11:27:36.661507 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:36.661518 | orchestrator | 2025-09-19 11:27:36.661528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:27:36.661594 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.175) 0:00:27.213 ****** 2025-09-19 
11:27:36.661607 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:36.661618 | orchestrator |
2025-09-19 11:27:36.661637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:36.661648 | orchestrator | Friday 19 September 2025  11:27:36 +0000 (0:00:00.190)       0:00:27.404 ******
2025-09-19 11:27:36.661659 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:36.661670 | orchestrator |
2025-09-19 11:27:36.661689 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214058 | orchestrator | Friday 19 September 2025  11:27:36 +0000 (0:00:00.178)       0:00:27.583 ******
2025-09-19 11:27:46.214120 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214131 | orchestrator |
2025-09-19 11:27:46.214139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214146 | orchestrator | Friday 19 September 2025  11:27:36 +0000 (0:00:00.174)       0:00:27.757 ******
2025-09-19 11:27:46.214153 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03)
2025-09-19 11:27:46.214161 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03)
2025-09-19 11:27:46.214167 | orchestrator |
2025-09-19 11:27:46.214174 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214180 | orchestrator | Friday 19 September 2025  11:27:37 +0000 (0:00:00.399)       0:00:28.156 ******
2025-09-19 11:27:46.214187 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd)
2025-09-19 11:27:46.214194 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd)
2025-09-19 11:27:46.214201 | orchestrator |
2025-09-19 11:27:46.214208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214214 | orchestrator | Friday 19 September 2025  11:27:37 +0000 (0:00:00.385)       0:00:28.542 ******
2025-09-19 11:27:46.214221 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7)
2025-09-19 11:27:46.214227 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7)
2025-09-19 11:27:46.214234 | orchestrator |
2025-09-19 11:27:46.214240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214247 | orchestrator | Friday 19 September 2025  11:27:38 +0000 (0:00:00.384)       0:00:28.926 ******
2025-09-19 11:27:46.214254 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d)
2025-09-19 11:27:46.214260 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d)
2025-09-19 11:27:46.214267 | orchestrator |
2025-09-19 11:27:46.214274 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:27:46.214280 | orchestrator | Friday 19 September 2025  11:27:38 +0000 (0:00:00.348)       0:00:29.275 ******
2025-09-19 11:27:46.214287 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:27:46.214293 | orchestrator |
2025-09-19 11:27:46.214300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214306 | orchestrator | Friday 19 September 2025  11:27:38 +0000 (0:00:00.302)       0:00:29.578 ******
2025-09-19 11:27:46.214313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:27:46.214330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:27:46.214337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:27:46.214343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:27:46.214350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:27:46.214356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:27:46.214362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:27:46.214380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:27:46.214387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 11:27:46.214393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:27:46.214399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:27:46.214405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:27:46.214411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:27:46.214417 | orchestrator |
2025-09-19 11:27:46.214423 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214429 | orchestrator | Friday 19 September 2025  11:27:39 +0000 (0:00:00.512)       0:00:30.091 ******
2025-09-19 11:27:46.214435 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214442 | orchestrator |
2025-09-19 11:27:46.214449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214455 | orchestrator | Friday 19 September 2025  11:27:39 +0000 (0:00:00.199)       0:00:30.290 ******
2025-09-19 11:27:46.214461 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214468 | orchestrator |
2025-09-19 11:27:46.214474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214479 | orchestrator | Friday 19 September 2025  11:27:39 +0000 (0:00:00.196)       0:00:30.487 ******
2025-09-19 11:27:46.214484 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214491 | orchestrator |
2025-09-19 11:27:46.214497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214503 | orchestrator | Friday 19 September 2025  11:27:39 +0000 (0:00:00.179)       0:00:30.667 ******
2025-09-19 11:27:46.214509 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214515 | orchestrator |
2025-09-19 11:27:46.214563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214572 | orchestrator | Friday 19 September 2025  11:27:39 +0000 (0:00:00.178)       0:00:30.845 ******
2025-09-19 11:27:46.214578 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214585 | orchestrator |
2025-09-19 11:27:46.214592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214598 | orchestrator | Friday 19 September 2025  11:27:40 +0000 (0:00:00.177)       0:00:31.022 ******
2025-09-19 11:27:46.214604 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214611 | orchestrator |
2025-09-19 11:27:46.214617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214624 | orchestrator | Friday 19 September 2025  11:27:40 +0000 (0:00:00.188)       0:00:31.211 ******
2025-09-19 11:27:46.214631 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214637 | orchestrator |
2025-09-19 11:27:46.214644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214651 | orchestrator | Friday 19 September 2025  11:27:40 +0000 (0:00:00.181)       0:00:31.393 ******
2025-09-19 11:27:46.214658 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214664 | orchestrator |
2025-09-19 11:27:46.214671 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214678 | orchestrator | Friday 19 September 2025  11:27:40 +0000 (0:00:00.185)       0:00:31.579 ******
2025-09-19 11:27:46.214684 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 11:27:46.214690 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 11:27:46.214697 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 11:27:46.214703 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 11:27:46.214709 | orchestrator |
2025-09-19 11:27:46.214715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214720 | orchestrator | Friday 19 September 2025  11:27:41 +0000 (0:00:00.726)       0:00:32.305 ******
2025-09-19 11:27:46.214734 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214740 | orchestrator |
2025-09-19 11:27:46.214746 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214753 | orchestrator | Friday 19 September 2025  11:27:41 +0000 (0:00:00.217)       0:00:32.523 ******
2025-09-19 11:27:46.214761 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214770 | orchestrator |
2025-09-19 11:27:46.214779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214787 | orchestrator | Friday 19 September 2025  11:27:41 +0000 (0:00:00.176)       0:00:32.699 ******
2025-09-19 11:27:46.214797 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214805 | orchestrator |
2025-09-19 11:27:46.214814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:27:46.214823 | orchestrator | Friday 19 September 2025  11:27:42 +0000 (0:00:00.481)       0:00:33.181 ******
2025-09-19 11:27:46.214832 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214841 | orchestrator |
2025-09-19 11:27:46.214850 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 11:27:46.214859 | orchestrator | Friday 19 September 2025  11:27:42 +0000 (0:00:00.166)       0:00:33.348 ******
2025-09-19 11:27:46.214868 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.214877 | orchestrator |
2025-09-19 11:27:46.214886 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 11:27:46.214896 | orchestrator | Friday 19 September 2025  11:27:42 +0000 (0:00:00.128)       0:00:33.476 ******
2025-09-19 11:27:46.214904 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ac676d1d-4f4c-546f-a12f-f85171bcd1d7'}})
2025-09-19 11:27:46.214913 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ffd16df6-6207-59ff-a831-a7eb6df6d5c2'}})
2025-09-19 11:27:46.214919 | orchestrator |
2025-09-19 11:27:46.214925 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 11:27:46.214931 | orchestrator | Friday 19 September 2025  11:27:42 +0000 (0:00:00.193)       0:00:33.669 ******
2025-09-19 11:27:46.214939 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:46.214949 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:46.214957 | orchestrator |
2025-09-19 11:27:46.214965 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 11:27:46.214974 | orchestrator | Friday 19 September 2025  11:27:44 +0000 (0:00:01.998)       0:00:35.668 ******
2025-09-19 11:27:46.214983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:46.214993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:46.215001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:46.215010 | orchestrator |
2025-09-19 11:27:46.215019 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 11:27:46.215028 | orchestrator | Friday 19 September 2025  11:27:44 +0000 (0:00:00.147)       0:00:35.815 ******
2025-09-19 11:27:46.215036 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:46.215045 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:46.215054 | orchestrator |
2025-09-19 11:27:46.215069 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 11:27:51.410268 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:01.320)       0:00:37.136 ******
2025-09-19 11:27:51.410371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410398 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410410 | orchestrator |
2025-09-19 11:27:51.410421 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 11:27:51.410432 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:00.138)       0:00:37.275 ******
2025-09-19 11:27:51.410443 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410454 | orchestrator |
2025-09-19 11:27:51.410464 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 11:27:51.410476 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:00.140)       0:00:37.415 ******
2025-09-19 11:27:51.410487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410561 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410573 | orchestrator |
2025-09-19 11:27:51.410583 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 11:27:51.410594 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:00.121)       0:00:37.536 ******
2025-09-19 11:27:51.410605 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410616 | orchestrator |
2025-09-19 11:27:51.410626 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 11:27:51.410638 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:00.124)       0:00:37.661 ******
2025-09-19 11:27:51.410649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410671 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410681 | orchestrator |
2025-09-19 11:27:51.410692 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 11:27:51.410703 | orchestrator | Friday 19 September 2025  11:27:46 +0000 (0:00:00.147)       0:00:37.808 ******
2025-09-19 11:27:51.410718 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410729 | orchestrator |
2025-09-19 11:27:51.410740 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 11:27:51.410751 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.243)       0:00:38.052 ******
2025-09-19 11:27:51.410762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410784 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410794 | orchestrator |
2025-09-19 11:27:51.410805 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 11:27:51.410816 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.158)       0:00:38.210 ******
2025-09-19 11:27:51.410828 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:51.410840 | orchestrator |
2025-09-19 11:27:51.410852 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 11:27:51.410864 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.152)       0:00:38.363 ******
2025-09-19 11:27:51.410886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410912 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410923 | orchestrator |
2025-09-19 11:27:51.410935 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 11:27:51.410947 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.156)       0:00:38.520 ******
2025-09-19 11:27:51.410959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.410971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.410983 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.410994 | orchestrator |
2025-09-19 11:27:51.411006 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 11:27:51.411018 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.142)       0:00:38.662 ******
2025-09-19 11:27:51.411047 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:51.411060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:51.411073 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411084 | orchestrator |
2025-09-19 11:27:51.411096 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 11:27:51.411108 | orchestrator | Friday 19 September 2025  11:27:47 +0000 (0:00:00.165)       0:00:38.827 ******
2025-09-19 11:27:51.411120 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411132 | orchestrator |
2025-09-19 11:27:51.411144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 11:27:51.411156 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.140)       0:00:38.968 ******
2025-09-19 11:27:51.411168 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411179 | orchestrator |
2025-09-19 11:27:51.411190 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 11:27:51.411201 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.134)       0:00:39.103 ******
2025-09-19 11:27:51.411212 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411223 | orchestrator |
2025-09-19 11:27:51.411234 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 11:27:51.411246 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.133)       0:00:39.236 ******
2025-09-19 11:27:51.411257 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:27:51.411268 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-19 11:27:51.411280 | orchestrator | }
2025-09-19 11:27:51.411291 | orchestrator |
2025-09-19 11:27:51.411302 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 11:27:51.411313 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.131)       0:00:39.368 ******
2025-09-19 11:27:51.411324 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:27:51.411335 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-19 11:27:51.411346 | orchestrator | }
2025-09-19 11:27:51.411357 | orchestrator |
2025-09-19 11:27:51.411368 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 11:27:51.411379 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.128)       0:00:39.496 ******
2025-09-19 11:27:51.411390 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:27:51.411401 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 11:27:51.411419 | orchestrator | }
2025-09-19 11:27:51.411431 | orchestrator |
2025-09-19 11:27:51.411442 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 11:27:51.411453 | orchestrator | Friday 19 September 2025  11:27:48 +0000 (0:00:00.134)       0:00:39.630 ******
2025-09-19 11:27:51.411464 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:51.411475 | orchestrator |
2025-09-19 11:27:51.411486 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 11:27:51.411497 | orchestrator | Friday 19 September 2025  11:27:49 +0000 (0:00:00.660)       0:00:40.290 ******
2025-09-19 11:27:51.411512 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:51.411544 | orchestrator |
2025-09-19 11:27:51.411556 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 11:27:51.411567 | orchestrator | Friday 19 September 2025  11:27:49 +0000 (0:00:00.520)       0:00:40.811 ******
2025-09-19 11:27:51.411578 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:51.411589 | orchestrator |
2025-09-19 11:27:51.411600 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 11:27:51.411612 | orchestrator | Friday 19 September 2025  11:27:50 +0000 (0:00:00.528)       0:00:41.339 ******
2025-09-19 11:27:51.411623 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:51.411634 | orchestrator |
2025-09-19 11:27:51.411645 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 11:27:51.411656 | orchestrator | Friday 19 September 2025  11:27:50 +0000 (0:00:00.138)       0:00:41.478 ******
2025-09-19 11:27:51.411667 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411678 | orchestrator |
2025-09-19 11:27:51.411689 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 11:27:51.411699 | orchestrator | Friday 19 September 2025  11:27:50 +0000 (0:00:00.094)       0:00:41.572 ******
2025-09-19 11:27:51.411710 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411722 | orchestrator |
2025-09-19 11:27:51.411733 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 11:27:51.411744 | orchestrator | Friday 19 September 2025  11:27:50 +0000 (0:00:00.100)       0:00:41.673 ******
2025-09-19 11:27:51.411755 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:27:51.411766 | orchestrator |     "vgs_report": {
2025-09-19 11:27:51.411778 | orchestrator |         "vg": []
2025-09-19 11:27:51.411789 | orchestrator |     }
2025-09-19 11:27:51.411800 | orchestrator | }
2025-09-19 11:27:51.411811 | orchestrator |
2025-09-19 11:27:51.411822 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 11:27:51.411833 | orchestrator | Friday 19 September 2025  11:27:50 +0000 (0:00:00.134)       0:00:41.808 ******
2025-09-19 11:27:51.411844 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411855 | orchestrator |
2025-09-19 11:27:51.411866 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 11:27:51.411877 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.124)       0:00:41.932 ******
2025-09-19 11:27:51.411888 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411899 | orchestrator |
2025-09-19 11:27:51.411910 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 11:27:51.411921 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.135)       0:00:42.067 ******
2025-09-19 11:27:51.411932 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411943 | orchestrator |
2025-09-19 11:27:51.411953 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 11:27:51.411964 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.117)       0:00:42.185 ******
2025-09-19 11:27:51.411975 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:51.411986 | orchestrator |
2025-09-19 11:27:51.411997 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 11:27:51.412015 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.146)       0:00:42.332 ******
2025-09-19 11:27:56.107866 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.107973 | orchestrator |
2025-09-19 11:27:56.108013 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 11:27:56.108026 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.146)       0:00:42.478 ******
2025-09-19 11:27:56.108038 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108049 | orchestrator |
2025-09-19 11:27:56.108060 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 11:27:56.108072 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.326)       0:00:42.805 ******
2025-09-19 11:27:56.108083 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108094 | orchestrator |
2025-09-19 11:27:56.108105 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 11:27:56.108116 | orchestrator | Friday 19 September 2025  11:27:51 +0000 (0:00:00.119)       0:00:42.924 ******
2025-09-19 11:27:56.108127 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108138 | orchestrator |
2025-09-19 11:27:56.108148 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 11:27:56.108159 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.131)       0:00:43.056 ******
2025-09-19 11:27:56.108170 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108181 | orchestrator |
2025-09-19 11:27:56.108192 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 11:27:56.108203 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.137)       0:00:43.194 ******
2025-09-19 11:27:56.108214 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108225 | orchestrator |
2025-09-19 11:27:56.108235 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 11:27:56.108246 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.118)       0:00:43.313 ******
2025-09-19 11:27:56.108257 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108268 | orchestrator |
2025-09-19 11:27:56.108279 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 11:27:56.108289 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.131)       0:00:43.444 ******
2025-09-19 11:27:56.108300 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108311 | orchestrator |
2025-09-19 11:27:56.108322 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 11:27:56.108333 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.128)       0:00:43.573 ******
2025-09-19 11:27:56.108343 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108354 | orchestrator |
2025-09-19 11:27:56.108365 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 11:27:56.108376 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.137)       0:00:43.711 ******
2025-09-19 11:27:56.108387 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108400 | orchestrator |
2025-09-19 11:27:56.108412 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 11:27:56.108424 | orchestrator | Friday 19 September 2025  11:27:52 +0000 (0:00:00.138)       0:00:43.849 ******
2025-09-19 11:27:56.108455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108483 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108496 | orchestrator |
2025-09-19 11:27:56.108531 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 11:27:56.108544 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.139)       0:00:43.989 ******
2025-09-19 11:27:56.108557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108591 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108603 | orchestrator |
2025-09-19 11:27:56.108615 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 11:27:56.108627 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.145)       0:00:44.135 ******
2025-09-19 11:27:56.108640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108665 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108677 | orchestrator |
2025-09-19 11:27:56.108689 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 11:27:56.108701 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.134)       0:00:44.270 ******
2025-09-19 11:27:56.108713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108726 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108738 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108750 | orchestrator |
2025-09-19 11:27:56.108762 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 11:27:56.108792 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.274)       0:00:44.545 ******
2025-09-19 11:27:56.108803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108825 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108836 | orchestrator |
2025-09-19 11:27:56.108847 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 11:27:56.108858 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.149)       0:00:44.694 ******
2025-09-19 11:27:56.108869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108880 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108891 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108902 | orchestrator |
2025-09-19 11:27:56.108913 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 11:27:56.108925 | orchestrator | Friday 19 September 2025  11:27:53 +0000 (0:00:00.132)       0:00:44.827 ******
2025-09-19 11:27:56.108935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.108947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.108957 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.108968 | orchestrator |
2025-09-19 11:27:56.108979 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 11:27:56.108990 | orchestrator | Friday 19 September 2025  11:27:54 +0000 (0:00:00.150)       0:00:44.977 ******
2025-09-19 11:27:56.109001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.109020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.109032 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.109043 | orchestrator |
2025-09-19 11:27:56.109054 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 11:27:56.109104 | orchestrator | Friday 19 September 2025  11:27:54 +0000 (0:00:00.150)       0:00:45.128 ******
2025-09-19 11:27:56.109116 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:56.109127 | orchestrator |
2025-09-19 11:27:56.109139 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 11:27:56.109150 | orchestrator | Friday 19 September 2025  11:27:54 +0000 (0:00:00.641)       0:00:45.770 ******
2025-09-19 11:27:56.109160 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:56.109171 | orchestrator |
2025-09-19 11:27:56.109182 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 11:27:56.109193 | orchestrator | Friday 19 September 2025  11:27:55 +0000 (0:00:00.553)       0:00:46.323 ******
2025-09-19 11:27:56.109204 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:27:56.109214 | orchestrator |
2025-09-19 11:27:56.109226 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 11:27:56.109236 | orchestrator | Friday 19 September 2025  11:27:55 +0000 (0:00:00.185)       0:00:46.508 ******
2025-09-19 11:27:56.109247 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'vg_name': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.109259 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'vg_name': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.109270 | orchestrator |
2025-09-19 11:27:56.109281 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 11:27:56.109292 | orchestrator | Friday 19 September 2025  11:27:55 +0000 (0:00:00.187)       0:00:46.696 ******
2025-09-19 11:27:56.109303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.109314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.109325 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:56.109335 | orchestrator |
2025-09-19 11:27:56.109346 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 11:27:56.109357 | orchestrator | Friday 19 September 2025  11:27:55 +0000 (0:00:00.173)       0:00:46.870 ******
2025-09-19 11:27:56.109368 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:27:56.109379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:27:56.109397 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:02.458440 | orchestrator |
2025-09-19 11:28:02.458587 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 11:28:02.458606 | orchestrator | Friday 19 September 2025  11:27:56 +0000 (0:00:00.159)       0:00:47.029 ******
2025-09-19 11:28:02.458619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:28:02.458631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:28:02.458643 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:02.458655 | orchestrator |
2025-09-19 11:28:02.458666 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 11:28:02.458678 | orchestrator | Friday 19 September 2025  11:27:56 +0000 (0:00:00.155)       0:00:47.184 ******
2025-09-19 11:28:02.458707 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:28:02.458719 | orchestrator |     "lvm_report": {
2025-09-19 11:28:02.458731 | orchestrator |         "lv": [
2025-09-19 11:28:02.458742 | orchestrator |             {
2025-09-19 11:28:02.458753 | orchestrator |                 "lv_name": "osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7",
2025-09-19 11:28:02.458765 | orchestrator |                 "vg_name": "ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7"
2025-09-19 11:28:02.458776 | orchestrator |             },
2025-09-19 11:28:02.458787 | orchestrator |             {
2025-09-19 11:28:02.458806 | orchestrator |                 "lv_name": "osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2",
2025-09-19 11:28:02.458825 | orchestrator |                 "vg_name": "ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2"
2025-09-19 11:28:02.458843 | orchestrator |             }
2025-09-19 11:28:02.458861 | orchestrator |         ],
2025-09-19 11:28:02.458881 | orchestrator |         "pv": [
2025-09-19 11:28:02.458900 | orchestrator |             {
2025-09-19 11:28:02.458920 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 11:28:02.458939 | orchestrator |                 "vg_name": "ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7"
2025-09-19 11:28:02.458957 | orchestrator |             },
2025-09-19 11:28:02.458975 | orchestrator |             {
2025-09-19 11:28:02.458993 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 11:28:02.459013 | orchestrator |                 "vg_name":
"ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2" 2025-09-19 11:28:02.459032 | orchestrator |  } 2025-09-19 11:28:02.459053 | orchestrator |  ] 2025-09-19 11:28:02.459073 | orchestrator |  } 2025-09-19 11:28:02.459092 | orchestrator | } 2025-09-19 11:28:02.459106 | orchestrator | 2025-09-19 11:28:02.459119 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 11:28:02.459131 | orchestrator | 2025-09-19 11:28:02.459144 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:28:02.459156 | orchestrator | Friday 19 September 2025 11:27:56 +0000 (0:00:00.488) 0:00:47.674 ****** 2025-09-19 11:28:02.459169 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 11:28:02.459182 | orchestrator | 2025-09-19 11:28:02.459207 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:28:02.459221 | orchestrator | Friday 19 September 2025 11:27:56 +0000 (0:00:00.251) 0:00:47.925 ****** 2025-09-19 11:28:02.459234 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:02.459247 | orchestrator | 2025-09-19 11:28:02.459260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459272 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:00.236) 0:00:48.162 ****** 2025-09-19 11:28:02.459284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:28:02.459297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:28:02.459309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:28:02.459322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:28:02.459334 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:28:02.459347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:28:02.459357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:28:02.459368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:28:02.459378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 11:28:02.459389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:28:02.459400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:28:02.459420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:28:02.459432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:28:02.459442 | orchestrator | 2025-09-19 11:28:02.459453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459464 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:00.438) 0:00:48.600 ****** 2025-09-19 11:28:02.459475 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459489 | orchestrator | 2025-09-19 11:28:02.459527 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459539 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:00.200) 0:00:48.801 ****** 2025-09-19 11:28:02.459550 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459561 | orchestrator | 2025-09-19 11:28:02.459571 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459601 | orchestrator | 
Friday 19 September 2025 11:27:58 +0000 (0:00:00.229) 0:00:49.030 ****** 2025-09-19 11:28:02.459613 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459624 | orchestrator | 2025-09-19 11:28:02.459635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459645 | orchestrator | Friday 19 September 2025 11:27:58 +0000 (0:00:00.223) 0:00:49.254 ****** 2025-09-19 11:28:02.459656 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459667 | orchestrator | 2025-09-19 11:28:02.459678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459689 | orchestrator | Friday 19 September 2025 11:27:58 +0000 (0:00:00.214) 0:00:49.468 ****** 2025-09-19 11:28:02.459700 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459711 | orchestrator | 2025-09-19 11:28:02.459722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459733 | orchestrator | Friday 19 September 2025 11:27:58 +0000 (0:00:00.202) 0:00:49.671 ****** 2025-09-19 11:28:02.459744 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459754 | orchestrator | 2025-09-19 11:28:02.459765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459776 | orchestrator | Friday 19 September 2025 11:27:59 +0000 (0:00:00.766) 0:00:50.438 ****** 2025-09-19 11:28:02.459787 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459798 | orchestrator | 2025-09-19 11:28:02.459809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459820 | orchestrator | Friday 19 September 2025 11:27:59 +0000 (0:00:00.215) 0:00:50.654 ****** 2025-09-19 11:28:02.459831 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:02.459842 | orchestrator | 2025-09-19 11:28:02.459853 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459863 | orchestrator | Friday 19 September 2025 11:27:59 +0000 (0:00:00.203) 0:00:50.857 ****** 2025-09-19 11:28:02.459874 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba) 2025-09-19 11:28:02.459886 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba) 2025-09-19 11:28:02.459897 | orchestrator | 2025-09-19 11:28:02.459908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459919 | orchestrator | Friday 19 September 2025 11:28:00 +0000 (0:00:00.439) 0:00:51.297 ****** 2025-09-19 11:28:02.459930 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59) 2025-09-19 11:28:02.459940 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59) 2025-09-19 11:28:02.459951 | orchestrator | 2025-09-19 11:28:02.459962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.459973 | orchestrator | Friday 19 September 2025 11:28:00 +0000 (0:00:00.438) 0:00:51.736 ****** 2025-09-19 11:28:02.459996 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed) 2025-09-19 11:28:02.460007 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed) 2025-09-19 11:28:02.460018 | orchestrator | 2025-09-19 11:28:02.460029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.460040 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:00.469) 0:00:52.205 ****** 2025-09-19 11:28:02.460050 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36) 2025-09-19 11:28:02.460061 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36) 2025-09-19 11:28:02.460072 | orchestrator | 2025-09-19 11:28:02.460083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:28:02.460093 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:00.411) 0:00:52.616 ****** 2025-09-19 11:28:02.460104 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 11:28:02.460115 | orchestrator | 2025-09-19 11:28:02.460126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:02.460136 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:00.309) 0:00:52.925 ****** 2025-09-19 11:28:02.460147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:28:02.460158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:28:02.460168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:28:02.460179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:28:02.460190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:28:02.460200 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:28:02.460211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:28:02.460221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:28:02.460232 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 11:28:02.460243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:28:02.460253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:28:02.460270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:28:11.141989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:28:11.142132 | orchestrator | 2025-09-19 11:28:11.142148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.142159 | orchestrator | Friday 19 September 2025 11:28:02 +0000 (0:00:00.451) 0:00:53.377 ****** 2025-09-19 11:28:11.142169 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.142796 | orchestrator | 2025-09-19 11:28:11.142830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.142848 | orchestrator | Friday 19 September 2025 11:28:02 +0000 (0:00:00.161) 0:00:53.539 ****** 2025-09-19 11:28:11.142864 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.142882 | orchestrator | 2025-09-19 11:28:11.142901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.142918 | orchestrator | Friday 19 September 2025 11:28:02 +0000 (0:00:00.194) 0:00:53.733 ****** 2025-09-19 11:28:11.142930 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.142940 | orchestrator | 2025-09-19 11:28:11.142950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.142979 | orchestrator | Friday 19 September 2025 11:28:03 +0000 (0:00:00.512) 0:00:54.245 ****** 2025-09-19 11:28:11.142989 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 11:28:11.142999 | orchestrator | 2025-09-19 11:28:11.143008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143018 | orchestrator | Friday 19 September 2025 11:28:03 +0000 (0:00:00.168) 0:00:54.414 ****** 2025-09-19 11:28:11.143027 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143037 | orchestrator | 2025-09-19 11:28:11.143046 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143056 | orchestrator | Friday 19 September 2025 11:28:03 +0000 (0:00:00.196) 0:00:54.610 ****** 2025-09-19 11:28:11.143065 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143075 | orchestrator | 2025-09-19 11:28:11.143084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143094 | orchestrator | Friday 19 September 2025 11:28:03 +0000 (0:00:00.194) 0:00:54.805 ****** 2025-09-19 11:28:11.143103 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143113 | orchestrator | 2025-09-19 11:28:11.143122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143132 | orchestrator | Friday 19 September 2025 11:28:04 +0000 (0:00:00.189) 0:00:54.994 ****** 2025-09-19 11:28:11.143141 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143151 | orchestrator | 2025-09-19 11:28:11.143161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143170 | orchestrator | Friday 19 September 2025 11:28:04 +0000 (0:00:00.210) 0:00:55.205 ****** 2025-09-19 11:28:11.143180 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 11:28:11.143190 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 11:28:11.143200 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
11:28:11.143209 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 11:28:11.143219 | orchestrator | 2025-09-19 11:28:11.143229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143238 | orchestrator | Friday 19 September 2025 11:28:04 +0000 (0:00:00.604) 0:00:55.810 ****** 2025-09-19 11:28:11.143248 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143257 | orchestrator | 2025-09-19 11:28:11.143267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143276 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.183) 0:00:55.993 ****** 2025-09-19 11:28:11.143286 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143295 | orchestrator | 2025-09-19 11:28:11.143305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143315 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.189) 0:00:56.183 ****** 2025-09-19 11:28:11.143325 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143334 | orchestrator | 2025-09-19 11:28:11.143343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:28:11.143353 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.196) 0:00:56.380 ****** 2025-09-19 11:28:11.143363 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143372 | orchestrator | 2025-09-19 11:28:11.143382 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 11:28:11.143391 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.187) 0:00:56.567 ****** 2025-09-19 11:28:11.143401 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143410 | orchestrator | 2025-09-19 11:28:11.143420 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 11:28:11.143429 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.289) 0:00:56.857 ****** 2025-09-19 11:28:11.143439 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9d0af248-3195-52cb-bed6-977ad9e4ee39'}}) 2025-09-19 11:28:11.143449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6e702043-5e82-5f33-ad25-d539496f9fd1'}}) 2025-09-19 11:28:11.143464 | orchestrator | 2025-09-19 11:28:11.143474 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 11:28:11.143502 | orchestrator | Friday 19 September 2025 11:28:06 +0000 (0:00:00.180) 0:00:57.037 ****** 2025-09-19 11:28:11.143513 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'}) 2025-09-19 11:28:11.143524 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'}) 2025-09-19 11:28:11.143533 | orchestrator | 2025-09-19 11:28:11.143543 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 11:28:11.143571 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:01.938) 0:00:58.975 ****** 2025-09-19 11:28:11.143581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:11.143591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:11.143601 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143610 | orchestrator | 2025-09-19 11:28:11.143620 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 11:28:11.143629 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:00.165) 0:00:59.141 ****** 2025-09-19 11:28:11.143639 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'}) 2025-09-19 11:28:11.143662 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'}) 2025-09-19 11:28:11.143673 | orchestrator | 2025-09-19 11:28:11.143683 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 11:28:11.143692 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:01.373) 0:01:00.516 ****** 2025-09-19 11:28:11.143702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:11.143712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:11.143721 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143731 | orchestrator | 2025-09-19 11:28:11.143740 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 11:28:11.143750 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.186) 0:01:00.702 ****** 2025-09-19 11:28:11.143759 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143769 | orchestrator | 2025-09-19 11:28:11.143778 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 11:28:11.143788 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.149) 0:01:00.852 ****** 2025-09-19 11:28:11.143798 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:11.143812 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:11.143822 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143840 | orchestrator | 2025-09-19 11:28:11.143856 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 11:28:11.143872 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.172) 0:01:01.024 ****** 2025-09-19 11:28:11.143889 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.143918 | orchestrator | 2025-09-19 11:28:11.143953 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 11:28:11.143963 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.143) 0:01:01.168 ****** 2025-09-19 11:28:11.143981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:11.143991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:11.144001 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.144011 | orchestrator | 2025-09-19 11:28:11.144020 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 11:28:11.144030 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.169) 0:01:01.338 ****** 2025-09-19 11:28:11.144040 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.144049 | orchestrator | 2025-09-19 11:28:11.144059 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 11:28:11.144068 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.147) 0:01:01.485 ****** 2025-09-19 11:28:11.144079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:11.144097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:11.144114 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:11.144130 | orchestrator | 2025-09-19 11:28:11.144149 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 11:28:11.144166 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.137) 0:01:01.623 ****** 2025-09-19 11:28:11.144183 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:11.144200 | orchestrator | 2025-09-19 11:28:11.144217 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 11:28:11.144235 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.282) 0:01:01.905 ****** 2025-09-19 11:28:11.144263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:17.010007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:17.010142 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:17.010157 | orchestrator | 2025-09-19 11:28:17.010170 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 11:28:17.010182 | orchestrator | Friday 19 September 2025 
11:28:11 +0000 (0:00:00.160) 0:01:02.066 ****** 2025-09-19 11:28:17.010193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:17.010205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:17.010216 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:17.010227 | orchestrator | 2025-09-19 11:28:17.010238 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 11:28:17.010250 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.154) 0:01:02.220 ****** 2025-09-19 11:28:17.010261 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})  2025-09-19 11:28:17.010272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})  2025-09-19 11:28:17.010283 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:17.010314 | orchestrator | 2025-09-19 11:28:17.010326 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 11:28:17.010337 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.137) 0:01:02.358 ****** 2025-09-19 11:28:17.010348 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:17.010358 | orchestrator | 2025-09-19 11:28:17.010369 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 11:28:17.010379 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.134) 0:01:02.492 ****** 2025-09-19 11:28:17.010390 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
11:28:17.010401 | orchestrator |
2025-09-19 11:28:17.010412 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 11:28:17.010422 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.142) 0:01:02.634 ******
2025-09-19 11:28:17.010433 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.010443 | orchestrator |
2025-09-19 11:28:17.010454 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 11:28:17.010518 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.136) 0:01:02.770 ******
2025-09-19 11:28:17.010531 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:28:17.010543 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-19 11:28:17.010554 | orchestrator | }
2025-09-19 11:28:17.010567 | orchestrator |
2025-09-19 11:28:17.010579 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 11:28:17.010592 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.149) 0:01:02.920 ******
2025-09-19 11:28:17.010605 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:28:17.010617 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-19 11:28:17.010629 | orchestrator | }
2025-09-19 11:28:17.010642 | orchestrator |
2025-09-19 11:28:17.010654 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 11:28:17.010667 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.139) 0:01:03.060 ******
2025-09-19 11:28:17.010679 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:28:17.010692 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 11:28:17.010705 | orchestrator | }
2025-09-19 11:28:17.010717 | orchestrator |
2025-09-19 11:28:17.010730 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 11:28:17.010742 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.140) 0:01:03.201 ******
2025-09-19 11:28:17.010755 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:17.010767 | orchestrator |
2025-09-19 11:28:17.010779 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 11:28:17.010791 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.486) 0:01:03.687 ******
2025-09-19 11:28:17.010804 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:17.010816 | orchestrator |
2025-09-19 11:28:17.010829 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 11:28:17.010841 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.577) 0:01:04.265 ******
2025-09-19 11:28:17.010854 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:17.010867 | orchestrator |
2025-09-19 11:28:17.010880 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 11:28:17.010892 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.663) 0:01:04.928 ******
2025-09-19 11:28:17.010905 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:17.010918 | orchestrator |
2025-09-19 11:28:17.010929 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 11:28:17.010939 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.140) 0:01:05.068 ******
2025-09-19 11:28:17.010950 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.010961 | orchestrator |
2025-09-19 11:28:17.010971 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 11:28:17.010982 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.110) 0:01:05.179 ******
2025-09-19 11:28:17.011002 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011013 | orchestrator |
2025-09-19 11:28:17.011024 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 11:28:17.011035 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.101) 0:01:05.280 ******
2025-09-19 11:28:17.011046 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:28:17.011057 | orchestrator |  "vgs_report": {
2025-09-19 11:28:17.011068 | orchestrator |  "vg": []
2025-09-19 11:28:17.011095 | orchestrator |  }
2025-09-19 11:28:17.011107 | orchestrator | }
2025-09-19 11:28:17.011118 | orchestrator |
2025-09-19 11:28:17.011129 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 11:28:17.011140 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.118) 0:01:05.399 ******
2025-09-19 11:28:17.011151 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011162 | orchestrator |
2025-09-19 11:28:17.011172 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 11:28:17.011183 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.138) 0:01:05.538 ******
2025-09-19 11:28:17.011194 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011205 | orchestrator |
2025-09-19 11:28:17.011216 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 11:28:17.011227 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.127) 0:01:05.665 ******
2025-09-19 11:28:17.011237 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011248 | orchestrator |
2025-09-19 11:28:17.011259 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 11:28:17.011269 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.140) 0:01:05.806 ******
2025-09-19 11:28:17.011280 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011291 | orchestrator |
2025-09-19 11:28:17.011302 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 11:28:17.011313 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.124) 0:01:05.931 ******
2025-09-19 11:28:17.011323 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011334 | orchestrator |
2025-09-19 11:28:17.011345 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 11:28:17.011356 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.124) 0:01:06.055 ******
2025-09-19 11:28:17.011367 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011378 | orchestrator |
2025-09-19 11:28:17.011388 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 11:28:17.011399 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.145) 0:01:06.201 ******
2025-09-19 11:28:17.011410 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011420 | orchestrator |
2025-09-19 11:28:17.011431 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 11:28:17.011442 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.147) 0:01:06.349 ******
2025-09-19 11:28:17.011453 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011463 | orchestrator |
2025-09-19 11:28:17.011489 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 11:28:17.011501 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.131) 0:01:06.480 ******
2025-09-19 11:28:17.011512 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011523 | orchestrator |
2025-09-19 11:28:17.011533 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 11:28:17.011549 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.338) 0:01:06.818 ******
2025-09-19 11:28:17.011560 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011571 | orchestrator |
2025-09-19 11:28:17.011582 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 11:28:17.011593 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.146) 0:01:06.965 ******
2025-09-19 11:28:17.011603 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011620 | orchestrator |
2025-09-19 11:28:17.011631 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 11:28:17.011642 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.121) 0:01:07.086 ******
2025-09-19 11:28:17.011653 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011664 | orchestrator |
2025-09-19 11:28:17.011675 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 11:28:17.011685 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.126) 0:01:07.213 ******
2025-09-19 11:28:17.011696 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011707 | orchestrator |
2025-09-19 11:28:17.011718 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 11:28:17.011729 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.135) 0:01:07.349 ******
2025-09-19 11:28:17.011739 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011750 | orchestrator |
2025-09-19 11:28:17.011761 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 11:28:17.011772 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.135) 0:01:07.484 ******
2025-09-19 11:28:17.011783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:17.011793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:17.011804 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011815 | orchestrator |
2025-09-19 11:28:17.011826 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 11:28:17.011837 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.157) 0:01:07.642 ******
2025-09-19 11:28:17.011847 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:17.011858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:17.011869 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:17.011879 | orchestrator |
2025-09-19 11:28:17.011890 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 11:28:17.011901 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.145) 0:01:07.788 ******
2025-09-19 11:28:17.011919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.953558 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.953662 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.953679 | orchestrator |
2025-09-19 11:28:19.953692 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 11:28:19.953705 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.146) 0:01:07.934 ******
2025-09-19 11:28:19.953716 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.953727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.953738 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.953749 | orchestrator |
2025-09-19 11:28:19.953760 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 11:28:19.953771 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.156) 0:01:08.091 ******
2025-09-19 11:28:19.953782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.953817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.953829 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.953839 | orchestrator |
2025-09-19 11:28:19.953850 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 11:28:19.953861 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.137) 0:01:08.229 ******
2025-09-19 11:28:19.953872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.953883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.953894 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.953905 | orchestrator |
2025-09-19 11:28:19.953916 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 11:28:19.953927 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.134) 0:01:08.363 ******
2025-09-19 11:28:19.953937 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.953948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.953959 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.953970 | orchestrator |
2025-09-19 11:28:19.953981 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 11:28:19.953992 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.289) 0:01:08.653 ******
2025-09-19 11:28:19.954003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.954014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.954085 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.954096 | orchestrator |
2025-09-19 11:28:19.954107 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 11:28:19.954117 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:00.150) 0:01:08.803 ******
2025-09-19 11:28:19.954128 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:19.954140 | orchestrator |
2025-09-19 11:28:19.954151 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 11:28:19.954162 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:00.527) 0:01:09.330 ******
2025-09-19 11:28:19.954172 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:19.954183 | orchestrator |
2025-09-19 11:28:19.954194 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 11:28:19.954205 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:00.566) 0:01:09.897 ******
2025-09-19 11:28:19.954215 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:19.954226 | orchestrator |
2025-09-19 11:28:19.954237 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 11:28:19.954247 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.151) 0:01:10.049 ******
2025-09-19 11:28:19.954258 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'vg_name': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.954270 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'vg_name': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.954281 | orchestrator |
2025-09-19 11:28:19.954292 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 11:28:19.954310 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.170) 0:01:10.219 ******
2025-09-19 11:28:19.954341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.954353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.954364 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.954375 | orchestrator |
2025-09-19 11:28:19.954386 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 11:28:19.954396 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.168) 0:01:10.388 ******
2025-09-19 11:28:19.954407 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.954418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.954429 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.954440 | orchestrator |
2025-09-19 11:28:19.954451 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 11:28:19.954462 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.158) 0:01:10.546 ******
2025-09-19 11:28:19.954493 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:28:19.954520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:28:19.954532 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:19.954543 | orchestrator |
2025-09-19 11:28:19.954554 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 11:28:19.954564 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.160) 0:01:10.706 ******
2025-09-19 11:28:19.954575 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:28:19.954586 | orchestrator |  "lvm_report": {
2025-09-19 11:28:19.954597 | orchestrator |  "lv": [
2025-09-19 11:28:19.954608 | orchestrator |  {
2025-09-19 11:28:19.954619 | orchestrator |  "lv_name": "osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1",
2025-09-19 11:28:19.954636 | orchestrator |  "vg_name": "ceph-6e702043-5e82-5f33-ad25-d539496f9fd1"
2025-09-19 11:28:19.954647 | orchestrator |  },
2025-09-19 11:28:19.954658 | orchestrator |  {
2025-09-19 11:28:19.954668 | orchestrator |  "lv_name": "osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39",
2025-09-19 11:28:19.954679 | orchestrator |  "vg_name": "ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39"
2025-09-19 11:28:19.954690 | orchestrator |  }
2025-09-19 11:28:19.954700 | orchestrator |  ],
2025-09-19 11:28:19.954711 | orchestrator |  "pv": [
2025-09-19 11:28:19.954722 | orchestrator |  {
2025-09-19 11:28:19.954732 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 11:28:19.954743 | orchestrator |  "vg_name": "ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39"
2025-09-19 11:28:19.954754 | orchestrator |  },
2025-09-19 11:28:19.954764 | orchestrator |  {
2025-09-19 11:28:19.954775 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 11:28:19.954786 | orchestrator |  "vg_name": "ceph-6e702043-5e82-5f33-ad25-d539496f9fd1"
2025-09-19 11:28:19.954797 | orchestrator |  }
2025-09-19 11:28:19.954807 | orchestrator |  ]
2025-09-19 11:28:19.954818 | orchestrator |  }
2025-09-19 11:28:19.954829 | orchestrator | }
2025-09-19 11:28:19.954840 | orchestrator |
2025-09-19 11:28:19.954850 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:28:19.954869 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:28:19.954880 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:28:19.954891 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:28:19.954902 | orchestrator |
2025-09-19 11:28:19.954913 | orchestrator |
2025-09-19 11:28:19.954923 | orchestrator |
2025-09-19 11:28:19.954934 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:28:19.954945 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.147) 0:01:10.854 ******
2025-09-19 11:28:19.954955 | orchestrator | ===============================================================================
2025-09-19 11:28:19.954966 | orchestrator | Create block VGs -------------------------------------------------------- 6.00s
2025-09-19 11:28:19.954976 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s
2025-09-19 11:28:19.954987 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s
2025-09-19 11:28:19.954998 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.74s
2025-09-19 11:28:19.955008 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s
2025-09-19 11:28:19.955019 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s
2025-09-19 11:28:19.955029 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s
2025-09-19 11:28:19.955040 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s
2025-09-19 11:28:19.955058 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2025-09-19 11:28:20.363964 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2025-09-19 11:28:20.364091 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s
2025-09-19 11:28:20.364113 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.81s
2025-09-19 11:28:20.364131 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2025-09-19 11:28:20.364150 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2025-09-19 11:28:20.364168 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-09-19 11:28:20.364186 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-09-19 11:28:20.364205 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.69s
2025-09-19 11:28:20.364224 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2025-09-19 11:28:20.364241 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.66s
2025-09-19 11:28:20.364259 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-09-19 11:28:32.773228 | orchestrator | 2025-09-19 11:28:32 | INFO  | Task 9c41f79b-f25b-4c9a-b63b-f4d2e8036b11 (facts) was prepared for execution.
2025-09-19 11:28:32.773296 | orchestrator | 2025-09-19 11:28:32 | INFO  | It takes a moment until task 9c41f79b-f25b-4c9a-b63b-f4d2e8036b11 (facts) has been started and output is visible here.
2025-09-19 11:28:44.098813 | orchestrator |
2025-09-19 11:28:44.098872 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 11:28:44.098880 | orchestrator |
2025-09-19 11:28:44.098886 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 11:28:44.098891 | orchestrator | Friday 19 September 2025 11:28:36 +0000 (0:00:00.244) 0:00:00.244 ******
2025-09-19 11:28:44.098897 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:44.098903 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:44.098923 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:44.098929 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:44.098934 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:44.098939 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:44.098944 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:44.098949 | orchestrator |
2025-09-19 11:28:44.098954 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 11:28:44.098960 | orchestrator | Friday 19 September 2025 11:28:37 +0000 (0:00:01.062) 0:00:01.306 ******
2025-09-19 11:28:44.098972 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:28:44.098978 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:44.098983 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:44.098989 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:44.098994 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:44.098999 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:44.099004 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:44.099009 | orchestrator |
2025-09-19 11:28:44.099014 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 11:28:44.099019 | orchestrator |
2025-09-19 11:28:44.099024 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 11:28:44.099030 | orchestrator | Friday 19 September 2025 11:28:38 +0000 (0:00:01.109) 0:00:02.416 ******
2025-09-19 11:28:44.099035 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:44.099040 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:44.099045 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:44.099050 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:44.099055 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:44.099060 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:44.099065 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:44.099070 | orchestrator |
2025-09-19 11:28:44.099075 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 11:28:44.099080 | orchestrator |
2025-09-19 11:28:44.099086 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 11:28:44.099091 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:04.760) 0:00:07.176 ******
2025-09-19 11:28:44.099096 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:28:44.099101 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:44.099106 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:44.099111 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:44.099117 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:44.099122 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:44.099127 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:44.099132 | orchestrator |
2025-09-19 11:28:44.099137 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:28:44.099142 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099148 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099153 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099158 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099163 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099169 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099174 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:28:44.099184 | orchestrator |
2025-09-19 11:28:44.099189 | orchestrator |
2025-09-19 11:28:44.099194 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:28:44.099199 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:00.459) 0:00:07.636 ******
2025-09-19 11:28:44.099204 | orchestrator | ===============================================================================
2025-09-19 11:28:44.099209 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.76s
2025-09-19 11:28:44.099215 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s
2025-09-19 11:28:44.099220 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2025-09-19 11:28:44.099225 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s
2025-09-19 11:28:56.187598 | orchestrator | 2025-09-19 11:28:56 | INFO  | Task e2647b93-bc31-4c1d-8b87-63140074967e (frr) was prepared for execution.
2025-09-19 11:28:56.187713 | orchestrator | 2025-09-19 11:28:56 | INFO  | It takes a moment until task e2647b93-bc31-4c1d-8b87-63140074967e (frr) has been started and output is visible here.
2025-09-19 11:29:21.250492 | orchestrator |
2025-09-19 11:29:21.250604 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-19 11:29:21.250621 | orchestrator |
2025-09-19 11:29:21.250634 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-19 11:29:21.250646 | orchestrator | Friday 19 September 2025 11:29:00 +0000 (0:00:00.203) 0:00:00.203 ******
2025-09-19 11:29:21.250658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:29:21.250670 | orchestrator |
2025-09-19 11:29:21.250681 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-19 11:29:21.250692 | orchestrator | Friday 19 September 2025 11:29:00 +0000 (0:00:00.234) 0:00:00.438 ******
2025-09-19 11:29:21.250703 | orchestrator | changed: [testbed-manager]
2025-09-19 11:29:21.250715 | orchestrator |
2025-09-19 11:29:21.250726 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-19 11:29:21.250737 | orchestrator | Friday 19 September 2025 11:29:01 +0000 (0:00:01.053) 0:00:01.492 ******
2025-09-19 11:29:21.250748 | orchestrator | changed: [testbed-manager]
2025-09-19 11:29:21.250759 | orchestrator |
2025-09-19 11:29:21.250786 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-19 11:29:21.250798 | orchestrator | Friday 19 September 2025 11:29:10 +0000 (0:00:09.133) 0:00:10.625 ******
2025-09-19 11:29:21.250809 | orchestrator | ok: [testbed-manager]
2025-09-19 11:29:21.250820 | orchestrator |
2025-09-19 11:29:21.250831 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-19 11:29:21.250842 | orchestrator | Friday 19 September 2025 11:29:11 +0000 (0:00:01.324) 0:00:11.950 ******
2025-09-19 11:29:21.250853 | orchestrator | changed: [testbed-manager]
2025-09-19 11:29:21.250864 | orchestrator |
2025-09-19 11:29:21.250874 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 11:29:21.250885 | orchestrator | Friday 19 September 2025 11:29:12 +0000 (0:00:00.976) 0:00:12.926 ******
2025-09-19 11:29:21.250896 | orchestrator | ok: [testbed-manager]
2025-09-19 11:29:21.250907 | orchestrator |
2025-09-19 11:29:21.250918 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 11:29:21.250929 | orchestrator | Friday 19 September 2025 11:29:14 +0000 (0:00:01.179) 0:00:14.105 ******
2025-09-19 11:29:21.250940 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:29:21.250950 | orchestrator |
2025-09-19 11:29:21.250961 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 11:29:21.250973 | orchestrator | Friday 19 September 2025 11:29:14 +0000 (0:00:00.817) 0:00:14.923 ******
2025-09-19 11:29:21.251030 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:29:21.251043 | orchestrator |
2025-09-19 11:29:21.251056 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 11:29:21.251095 | orchestrator | Friday 19 September 2025 11:29:14 +0000 (0:00:00.171) 0:00:15.094 ******
2025-09-19 11:29:21.251108 | orchestrator | changed: [testbed-manager]
2025-09-19 11:29:21.251120 | orchestrator |
2025-09-19 11:29:21.251132 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 11:29:21.251144 | orchestrator | Friday 19 September 2025 11:29:15 +0000 (0:00:00.967) 0:00:16.062 ******
2025-09-19 11:29:21.251157 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 11:29:21.251169 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 11:29:21.251180 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 11:29:21.251191 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 11:29:21.251203 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 11:29:21.251213 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 11:29:21.251224 | orchestrator |
2025-09-19 11:29:21.251235 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 11:29:21.251246 | orchestrator | Friday 19 September 2025 11:29:18 +0000 (0:00:02.204) 0:00:18.266 ******
2025-09-19 11:29:21.251257 | orchestrator | ok: [testbed-manager]
2025-09-19 11:29:21.251268 | orchestrator |
2025-09-19 11:29:21.251278 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 11:29:21.251295 | orchestrator | Friday 19 September 2025 11:29:19 +0000 (0:00:01.385) 0:00:19.652 ******
2025-09-19 11:29:21.251313 | orchestrator | changed: [testbed-manager]
2025-09-19 11:29:21.251331 | orchestrator |
2025-09-19 11:29:21.251349 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:29:21.251400 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:29:21.251417 | orchestrator |
2025-09-19 11:29:21.251434 | orchestrator |
2025-09-19 11:29:21.251452 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:29:21.251469 | orchestrator | Friday 19 September 2025 11:29:20 +0000 (0:00:01.422) 0:00:21.075 ******
2025-09-19 11:29:21.251486 | orchestrator | ===============================================================================
2025-09-19 11:29:21.251505 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.13s
2025-09-19 11:29:21.251523 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.20s
2025-09-19 11:29:21.251541 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.42s
2025-09-19 11:29:21.251558 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s
2025-09-19 11:29:21.251601 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.32s
2025-09-19 11:29:21.251620 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.18s
2025-09-19 11:29:21.251631 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.05s
2025-09-19 11:29:21.251641 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s
2025-09-19 11:29:21.251652 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.97s
2025-09-19 11:29:21.251663 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.82s
2025-09-19 11:29:21.251673 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-09-19 11:29:21.251684 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-09-19 11:29:21.532566 | orchestrator |
2025-09-19 11:29:21.534820 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 11:29:21 UTC 2025
2025-09-19 11:29:21.534866 | orchestrator |
2025-09-19 11:29:23.462630 | orchestrator | 2025-09-19 11:29:23 | INFO  | Collection nutshell is prepared for execution
2025-09-19 11:29:23.462726 | orchestrator | 2025-09-19 11:29:23 | INFO  | D [0] - dotfiles
2025-09-19 11:29:33.495874 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [0] - homer
2025-09-19 11:29:33.496053 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [0] - netdata
2025-09-19 11:29:33.496083 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [0] - openstackclient
2025-09-19 11:29:33.496284 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [0] - phpmyadmin
2025-09-19 11:29:33.496900 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [0] - common
2025-09-19 11:29:33.501591 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [1] -- loadbalancer
2025-09-19 11:29:33.501667 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [2] --- opensearch
2025-09-19 11:29:33.501687 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [2] --- mariadb-ng
2025-09-19 11:29:33.502083 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [3] ---- horizon
2025-09-19 11:29:33.502288 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [3] ---- keystone
2025-09-19 11:29:33.502304 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [4] ----- neutron
2025-09-19 11:29:33.502802 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ wait-for-nova
2025-09-19 11:29:33.502833 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [5] ------ octavia
2025-09-19 11:29:33.504258 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- barbican
2025-09-19 11:29:33.504284 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- designate
2025-09-19 11:29:33.504296 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- ironic
2025-09-19 11:29:33.504604 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- placement
2025-09-19 11:29:33.504630 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- magnum
2025-09-19 11:29:33.506645 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [1] -- openvswitch
2025-09-19 11:29:33.506694 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [2] --- ovn
2025-09-19 11:29:33.506702 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [1] --
memcached 2025-09-19 11:29:33.506709 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [1] -- redis 2025-09-19 11:29:33.506716 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [1] -- rabbitmq-ng 2025-09-19 11:29:33.506723 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [0] - kubernetes 2025-09-19 11:29:33.511119 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [1] -- kubeconfig 2025-09-19 11:29:33.511162 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [1] -- copy-kubeconfig 2025-09-19 11:29:33.511172 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [0] - ceph 2025-09-19 11:29:33.512758 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [1] -- ceph-pools 2025-09-19 11:29:33.512899 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [2] --- copy-ceph-keys 2025-09-19 11:29:33.512915 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [3] ---- cephclient 2025-09-19 11:29:33.512927 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-19 11:29:33.512948 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [4] ----- wait-for-keystone 2025-09-19 11:29:33.513120 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-19 11:29:33.513139 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ glance 2025-09-19 11:29:33.513154 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ cinder 2025-09-19 11:29:33.513290 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ nova 2025-09-19 11:29:33.513894 | orchestrator | 2025-09-19 11:29:33 | INFO  | A [4] ----- prometheus 2025-09-19 11:29:33.513918 | orchestrator | 2025-09-19 11:29:33 | INFO  | D [5] ------ grafana 2025-09-19 11:29:33.713276 | orchestrator | 2025-09-19 11:29:33 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-19 11:29:33.713419 | orchestrator | 2025-09-19 11:29:33 | INFO  | Tasks are running in the background 2025-09-19 11:29:36.647745 | orchestrator | 2025-09-19 11:29:36 | INFO  | No task IDs specified, wait for 
all currently running tasks
2025-09-19 11:29:38.776580 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task df9b5266-741b-4818-be6a-db5a24b1c6b1 is in state STARTED
2025-09-19 11:29:38.779366 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:29:38.779660 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED
2025-09-19 11:29:38.780211 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:29:38.780742 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 4f133ac2-a9c0-47ea-9192-cc216da0888c is in state STARTED
2025-09-19 11:29:38.781256 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state STARTED
2025-09-19 11:29:38.781812 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:29:38.781846 | orchestrator | 2025-09-19 11:29:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:30:00.552556 | orchestrator |
2025-09-19 11:30:00.552629 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 11:30:00.552642 | orchestrator |
2025-09-19 11:30:00.552651 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-09-19 11:30:00.552660 | orchestrator | Friday 19 September 2025  11:29:46 +0000 (0:00:00.447)       0:00:00.447 ******
2025-09-19 11:30:00.552669 | orchestrator | changed: [testbed-manager]
2025-09-19 11:30:00.552679 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:30:00.552687 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:30:00.552696 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:30:00.552704 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:30:00.552713 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:30:00.552721 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:30:00.552730 | orchestrator |
2025-09-19 11:30:00.552738 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-19 11:30:00.552747 | orchestrator | Friday 19 September 2025  11:29:51 +0000 (0:00:04.439)       0:00:04.887 ******
2025-09-19 11:30:00.552755 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:30:00.552764 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:30:00.552773 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:30:00.552781 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:30:00.552790 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:30:00.552798 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:30:00.552807 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:30:00.552815 | orchestrator |
2025-09-19 11:30:00.552824 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2025-09-19 11:30:00.552833 | orchestrator | Friday 19 September 2025  11:29:52 +0000 (0:00:01.388)       0:00:06.276 ******
2025-09-19 11:30:00.552845 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.082835', 'end': '2025-09-19 11:29:52.086681', 'delta': '0:00:00.003846', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552868 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.169721', 'end': '2025-09-19 11:29:52.179847', 'delta': '0:00:00.010126', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552893 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.154934', 'end': '2025-09-19 11:29:52.164343', 'delta': '0:00:00.009409', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552923 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.173159', 'end': '2025-09-19 11:29:52.182349', 'delta': '0:00:00.009190', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552933 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.199834', 'end': '2025-09-19 11:29:52.209261', 'delta': '0:00:00.009427', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552942 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.140424', 'end': '2025-09-19 11:29:52.151193', 'delta': '0:00:00.010769', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552955 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:29:52.153256', 'end': '2025-09-19 11:29:52.159238', 'delta': '0:00:00.005982', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines':
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:30:00.552974 | orchestrator |
2025-09-19 11:30:00.552983 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-19 11:30:00.552992 | orchestrator | Friday 19 September 2025  11:29:54 +0000 (0:00:01.859)       0:00:08.135 ******
2025-09-19 11:30:00.553000 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:30:00.553010 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:30:00.553018 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:30:00.553027 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:30:00.553036 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:30:00.553044 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:30:00.553053 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:30:00.553061 | orchestrator |
2025-09-19 11:30:00.553070 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-19 11:30:00.553079 | orchestrator | Friday 19 September 2025  11:29:55 +0000 (0:00:01.597)       0:00:09.734 ******
2025-09-19 11:30:00.553087 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:30:00.553096 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:30:00.553105 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:30:00.553115 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:30:00.553126 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:30:00.553136 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:30:00.553146 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:30:00.553156 | orchestrator |
2025-09-19 11:30:00.553165 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:30:00.553181 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553192 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553202 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553213 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553223 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553233 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553242 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:00.553252 | orchestrator |
2025-09-19 11:30:00.553262 | orchestrator |
2025-09-19 11:30:00.553272 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:30:00.553282 | orchestrator | Friday 19 September 2025  11:29:58 +0000 (0:00:02.438)       0:00:12.172 ******
2025-09-19 11:30:00.553311 | orchestrator | ===============================================================================
2025-09-19 11:30:00.553320 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.44s
2025-09-19 11:30:00.553328 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.44s
2025-09-19 11:30:00.553343 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.86s
2025-09-19 11:30:00.553351 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.60s
2025-09-19 11:30:00.553360 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.39s
2025-09-19 11:30:00.553369 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task df9b5266-741b-4818-be6a-db5a24b1c6b1 is in state SUCCESS
2025-09-19 11:30:00.553378 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:30:00.553387 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED
2025-09-19 11:30:00.553395 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:30:00.553404 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task 4f133ac2-a9c0-47ea-9192-cc216da0888c is in state STARTED
2025-09-19 11:30:00.553416 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state STARTED
2025-09-19 11:30:00.553425 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:30:00.553434 | orchestrator | 2025-09-19 11:30:00 | INFO  | Task
1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED
2025-09-19 11:30:00.553443 | orchestrator | 2025-09-19 11:30:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:30:03.754345 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:30:03.754434 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED
2025-09-19 11:30:03.754456 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:30:03.754476 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task 4f133ac2-a9c0-47ea-9192-cc216da0888c is in state STARTED
2025-09-19 11:30:03.754495 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state STARTED
2025-09-19 11:30:03.754513 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:30:03.754531 | orchestrator | 2025-09-19 11:30:03 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED
2025-09-19 11:30:03.754546 | orchestrator | 2025-09-19 11:30:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:30:25.264798 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:30:25.264882 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED
2025-09-19 11:30:25.264896 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task
7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:25.265191 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task 4f133ac2-a9c0-47ea-9192-cc216da0888c is in state SUCCESS 2025-09-19 11:30:25.265206 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state STARTED 2025-09-19 11:30:25.265217 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:25.265227 | orchestrator | 2025-09-19 11:30:25 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:25.265283 | orchestrator | 2025-09-19 11:30:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:28.280385 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:28.281788 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:28.283583 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:28.298725 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state STARTED 2025-09-19 11:30:28.304748 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:28.307674 | orchestrator | 2025-09-19 11:30:28 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:28.307745 | orchestrator | 2025-09-19 11:30:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:31.347432 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:31.348703 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:31.350615 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task 
7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:31.350998 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task 3d867809-e4aa-449a-92b6-373caee0683a is in state SUCCESS 2025-09-19 11:30:31.352907 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:31.353399 | orchestrator | 2025-09-19 11:30:31 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:31.355393 | orchestrator | 2025-09-19 11:30:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:34.404987 | orchestrator | 2025-09-19 11:30:34 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:34.406282 | orchestrator | 2025-09-19 11:30:34 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:34.406391 | orchestrator | 2025-09-19 11:30:34 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:34.406415 | orchestrator | 2025-09-19 11:30:34 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:34.407153 | orchestrator | 2025-09-19 11:30:34 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:34.407174 | orchestrator | 2025-09-19 11:30:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:37.517347 | orchestrator | 2025-09-19 11:30:37 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:37.517427 | orchestrator | 2025-09-19 11:30:37 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:37.517440 | orchestrator | 2025-09-19 11:30:37 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:37.517451 | orchestrator | 2025-09-19 11:30:37 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:37.517462 | orchestrator | 2025-09-19 11:30:37 | INFO  | Task 
1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:37.517472 | orchestrator | 2025-09-19 11:30:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:40.492624 | orchestrator | 2025-09-19 11:30:40 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:40.494725 | orchestrator | 2025-09-19 11:30:40 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:40.497651 | orchestrator | 2025-09-19 11:30:40 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:40.501959 | orchestrator | 2025-09-19 11:30:40 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:40.503045 | orchestrator | 2025-09-19 11:30:40 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:40.503069 | orchestrator | 2025-09-19 11:30:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:43.581101 | orchestrator | 2025-09-19 11:30:43 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:43.583761 | orchestrator | 2025-09-19 11:30:43 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:43.586986 | orchestrator | 2025-09-19 11:30:43 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:43.588854 | orchestrator | 2025-09-19 11:30:43 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:43.590571 | orchestrator | 2025-09-19 11:30:43 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:43.590625 | orchestrator | 2025-09-19 11:30:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:46.619006 | orchestrator | 2025-09-19 11:30:46 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:46.620457 | orchestrator | 2025-09-19 11:30:46 | INFO  | Task 
d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:46.620724 | orchestrator | 2025-09-19 11:30:46 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:46.621421 | orchestrator | 2025-09-19 11:30:46 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:46.624113 | orchestrator | 2025-09-19 11:30:46 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:46.624171 | orchestrator | 2025-09-19 11:30:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:49.661385 | orchestrator | 2025-09-19 11:30:49 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:49.661442 | orchestrator | 2025-09-19 11:30:49 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state STARTED 2025-09-19 11:30:49.663646 | orchestrator | 2025-09-19 11:30:49 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED 2025-09-19 11:30:49.667750 | orchestrator | 2025-09-19 11:30:49 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED 2025-09-19 11:30:49.671003 | orchestrator | 2025-09-19 11:30:49 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED 2025-09-19 11:30:49.671021 | orchestrator | 2025-09-19 11:30:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:52.719584 | orchestrator | 2025-09-19 11:30:52 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:30:52.728322 | orchestrator | 2025-09-19 11:30:52.728392 | orchestrator | 2025-09-19 11:30:52.728405 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-19 11:30:52.728417 | orchestrator | 2025-09-19 11:30:52.728429 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-19 11:30:52.728441 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:00.283) 
0:00:00.283 ******
ok: [testbed-manager] => {
    "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
}

TASK [osism.services.homer : Create traefik external network] ******************
Friday 19 September 2025 11:29:44 +0000 (0:00:00.137)       0:00:00.420 ******
ok: [testbed-manager]

TASK [osism.services.homer : Create required directories] **********************
Friday 19 September 2025 11:29:46 +0000 (0:00:01.242)       0:00:01.663 ******
changed: [testbed-manager] => (item=/opt/homer/configuration)
ok: [testbed-manager] => (item=/opt/homer)

TASK [osism.services.homer : Copy config.yml configuration file] ***************
Friday 19 September 2025 11:29:47 +0000 (0:00:01.618)       0:00:03.281 ******
changed: [testbed-manager]

TASK [osism.services.homer : Copy docker-compose.yml file] *********************
Friday 19 September 2025 11:29:49 +0000 (0:00:02.328)       0:00:05.609 ******
changed: [testbed-manager]

TASK [osism.services.homer : Manage homer service] *****************************
Friday 19 September 2025 11:29:51 +0000 (0:00:01.987)       0:00:07.597 ******
FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
Friday 19 September 2025 11:30:18 +0000 (0:00:26.623)       0:00:34.221 ******
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=7    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Friday 19 September 2025 11:30:22 +0000 (0:00:03.800)       0:00:38.022 ******
===============================================================================
osism.services.homer : Manage homer service ---------------------------- 26.62s
osism.services.homer : Restart homer service ---------------------------- 3.80s
osism.services.homer : Copy config.yml configuration file --------------- 2.33s
osism.services.homer : Copy docker-compose.yml file --------------------- 1.99s
osism.services.homer : Create required directories ---------------------- 1.62s
osism.services.homer : Create traefik external network ------------------ 1.24s
osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.14s

PLAY [Apply role openstackclient] **********************************************

TASK [osism.services.openstackclient : Include tasks] **************************
Friday 19 September 2025 11:29:44 +0000 (0:00:00.619)       0:00:00.619 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

TASK [osism.services.openstackclient : Create required directories] ************
Friday 19 September 2025 11:29:45 +0000 (0:00:00.270)       0:00:00.890 ******
changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
changed: [testbed-manager] => (item=/opt/openstackclient/data)
ok: [testbed-manager] => (item=/opt/openstackclient)

TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
Friday 19 September 2025 11:29:46 +0000 (0:00:01.400)       0:00:02.290 ******
changed: [testbed-manager]

TASK [osism.services.openstackclient : Manage openstackclient service] *********
Friday 19 September 2025 11:29:48 +0000 (0:00:01.783)       0:00:04.074 ******
FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
ok: [testbed-manager]

TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
Friday 19 September 2025 11:30:20 +0000 (0:00:31.738)       0:00:35.813 ******
changed: [testbed-manager]

TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
Friday 19 September 2025 11:30:21 +0000 (0:00:01.611)       0:00:37.425 ******
ok: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
Friday 19 September 2025 11:30:23 +0000 (0:00:01.719)       0:00:39.145 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
Friday 19 September 2025 11:30:25 +0000 (0:00:02.401)       0:00:41.546 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
Friday 19 September 2025 11:30:26 +0000 (0:00:01.003)       0:00:42.550 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
Friday 19 September 2025 11:30:27 +0000 (0:00:00.545)       0:00:43.095 ******
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=10   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Friday 19 September 2025 11:30:28 +0000 (0:00:00.647)       0:00:43.743 ******
===============================================================================
osism.services.openstackclient : Manage openstackclient service -------- 31.74s
osism.services.openstackclient : Restart openstackclient service -------- 2.40s
osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.78s
osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.72s
osism.services.openstackclient : Copy openstack wrapper script ---------- 1.61s
osism.services.openstackclient : Create required directories ------------ 1.40s
osism.services.openstackclient : Ensure that all containers are up ------ 1.00s
osism.services.openstackclient : Copy bash completion script ------------ 0.65s
osism.services.openstackclient : Wait for an healthy service ------------ 0.55s
osism.services.openstackclient : Include tasks -------------------------- 0.27s

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on enabled services] ***********************************
Friday 19 September 2025 11:29:45 +0000 (0:00:00.395)       0:00:00.395 ******
changed: [testbed-manager] => (item=enable_netdata_True)
changed: [testbed-node-0] => (item=enable_netdata_True)
changed: [testbed-node-1] => (item=enable_netdata_True)
changed: [testbed-node-2] => (item=enable_netdata_True)
changed: [testbed-node-3] => (item=enable_netdata_True)
changed: [testbed-node-4] => (item=enable_netdata_True)
changed: [testbed-node-5] => (item=enable_netdata_True)

PLAY [Apply role netdata]
******************************************************

TASK [osism.services.netdata : Include distribution specific install tasks] ****
Friday 19 September 2025 11:29:47 +0000 (0:00:02.098)       0:00:02.494 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
Friday 19 September 2025 11:29:49 +0000 (0:00:02.071)       0:00:04.566 ******
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.services.netdata : Install apt-transport-https package] ************
Friday 19 September 2025 11:29:51 +0000 (0:00:01.630)       0:00:06.197 ******
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Add repository gpg key] *************************
Friday 19 September 2025 11:29:55 +0000 (0:00:04.065)       0:00:10.263 ******
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add repository] *********************************
Friday 19 September 2025 11:29:58 +0000 (0:00:02.450)       0:00:12.714 ******
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [osism.services.netdata : Install package netdata] ************************
Friday 19 September 2025 11:30:08 +0000 (0:00:10.376)       0:00:23.090 ******
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.netdata : Include config tasks] ***************************
Friday 19 September 2025 11:30:31 +0000 (0:00:22.711)       0:00:45.801 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Copy configuration files] ***********************
Friday 19 September 2025 11:30:33 +0000 (0:00:01.826)       0:00:47.628 ******
changed: [testbed-manager] => (item=netdata.conf)
changed: [testbed-node-1] => (item=netdata.conf)
changed: [testbed-node-0] => (item=netdata.conf)
changed: [testbed-node-2] => (item=netdata.conf)
changed: [testbed-node-5] => (item=netdata.conf)
changed: [testbed-node-4] => (item=netdata.conf)
changed: [testbed-node-3] => (item=netdata.conf)
changed: [testbed-manager] => (item=stream.conf)
changed: [testbed-node-0] => (item=stream.conf)
changed: [testbed-node-1] => (item=stream.conf)
changed: [testbed-node-3] => (item=stream.conf)
changed: [testbed-node-5] => (item=stream.conf)
changed: [testbed-node-4] => (item=stream.conf)
changed: [testbed-node-2] => (item=stream.conf)

TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
Friday 19 September 2025 11:30:37 +0000 (0:00:04.092)       0:00:51.721 ******
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Opt out from anonymous statistics] **************
Friday 19 September 2025 11:30:38 +0000 (0:00:01.399)       0:00:53.120 ******
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add netdata user to docker group] ***************
Friday 19 September 2025 11:30:39 +0000 (0:00:01.154)       0:00:54.274 ******
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Manage service netdata] *************************
Friday 19 September 2025 11:30:41 +0000 (0:00:01.297)       0:00:55.572 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-5]

TASK [osism.services.netdata : Include host type specific tasks] ***************
Friday 19 September 2025 11:30:43 +0000 (0:00:02.148)       0:00:57.720 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
Friday 19 September 2025 11:30:44 +0000 (0:00:01.302)       0:00:59.023 ******
changed: [testbed-manager]

RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
Friday 19 September 2025 11:30:45 +0000 (0:00:01.505)       0:01:00.529 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager            : ok=16   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-3             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-4             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-5             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Friday 19 September 2025 11:30:49 +0000 (0:00:03.329)       0:01:03.858 ******
===============================================================================
osism.services.netdata : Install package netdata ----------------------- 22.71s
osism.services.netdata : Add repository -------------------------------- 10.38s
osism.services.netdata : Copy configuration files ----------------------- 4.09s
osism.services.netdata : Install apt-transport-https package ------------ 4.07s
osism.services.netdata : Restart service netdata ------------------------ 3.33s
osism.services.netdata : Add repository gpg key ------------------------- 2.45s
osism.services.netdata : Manage service netdata ------------------------- 2.15s
Group hosts based on enabled services ----------------------------------- 2.10s
osism.services.netdata : Include distribution specific install tasks ---- 2.07s
osism.services.netdata : Include config tasks --------------------------- 1.83s
osism.services.netdata : Remove old architecture-dependent repository --- 1.63s
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.51s
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.40s
osism.services.netdata : Include host type specific tasks --------------- 1.30s
osism.services.netdata : Add netdata user to docker group --------------- 1.30s
osism.services.netdata : Opt out from anonymous statistics -------------- 1.15s

2025-09-19 11:30:52 | INFO  | Task d2824b18-14fd-4675-b635-be44f321f91d is in state SUCCESS
2025-09-19 11:30:52 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:30:52 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:30:52 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state STARTED
2025-09-19 11:30:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:30:55 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:30:55 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:30:55.768317 | orchestrator | 2025-09-19 11:30:55 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:30:55.769446 | orchestrator | 2025-09-19 11:30:55 | INFO  | Task 1ff77670-a261-4011-b570-6677feef4c57 is in state SUCCESS
2025-09-19 11:30:55.769483 | orchestrator | 2025-09-19 11:30:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:30:58.812716 | orchestrator | 2025-09-19 11:30:58 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:30:58.812992 | orchestrator | 2025-09-19 11:30:58 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:30:58.813886 | orchestrator | 2025-09-19 11:30:58 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:30:58.813911 | orchestrator | 2025-09-19 11:30:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:01.846924 | orchestrator | 2025-09-19 11:31:01 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:01.849443 | orchestrator | 2025-09-19 11:31:01 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:01.850235 | orchestrator | 2025-09-19 11:31:01 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:01.850272 | orchestrator | 2025-09-19 11:31:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:04.902669 | orchestrator | 2025-09-19 11:31:04 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:04.902775 | orchestrator | 2025-09-19 11:31:04 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:04.903563 | orchestrator | 2025-09-19 11:31:04 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:04.904151 | orchestrator | 2025-09-19 11:31:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:07.960602 | orchestrator | 2025-09-19 11:31:07 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:07.963691 | orchestrator | 2025-09-19 11:31:07 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:07.964315 | orchestrator | 2025-09-19 11:31:07 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:07.964340 | orchestrator | 2025-09-19 11:31:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:11.079935 | orchestrator | 2025-09-19 11:31:11 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:11.080030 | orchestrator | 2025-09-19 11:31:11 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:11.080043 | orchestrator | 2025-09-19 11:31:11 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:11.082598 | orchestrator | 2025-09-19 11:31:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:14.123484 | orchestrator | 2025-09-19 11:31:14 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:14.123571 | orchestrator | 2025-09-19 11:31:14 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:14.123586 | orchestrator | 2025-09-19 11:31:14 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:14.123598 | orchestrator | 2025-09-19 11:31:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:17.164398 | orchestrator | 2025-09-19 11:31:17 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:17.165206 | orchestrator | 2025-09-19 11:31:17 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:17.166839 | orchestrator | 2025-09-19 11:31:17 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:17.167997 | orchestrator | 2025-09-19 11:31:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:20.198761 | orchestrator | 2025-09-19 11:31:20 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:20.199453 | orchestrator | 2025-09-19 11:31:20 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:20.200676 | orchestrator | 2025-09-19 11:31:20 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:20.200953 | orchestrator | 2025-09-19 11:31:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:23.241902 | orchestrator | 2025-09-19 11:31:23 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:23.242269 | orchestrator | 2025-09-19 11:31:23 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:23.243495 | orchestrator | 2025-09-19 11:31:23 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:23.243524 | orchestrator | 2025-09-19 11:31:23 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:26.290804 | orchestrator | 2025-09-19 11:31:26 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:26.291804 | orchestrator | 2025-09-19 11:31:26 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:26.293792 | orchestrator | 2025-09-19 11:31:26 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:26.293816 | orchestrator | 2025-09-19 11:31:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:29.333954 | orchestrator | 2025-09-19 11:31:29 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:29.335977 | orchestrator | 2025-09-19 11:31:29 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:29.337035 | orchestrator | 2025-09-19 11:31:29 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:29.337532 | orchestrator | 2025-09-19 11:31:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:32.384224 | orchestrator | 2025-09-19 11:31:32 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:32.384680 | orchestrator | 2025-09-19 11:31:32 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:32.386577 | orchestrator | 2025-09-19 11:31:32 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:32.386615 | orchestrator | 2025-09-19 11:31:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:35.428903 | orchestrator | 2025-09-19 11:31:35 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:35.433147 | orchestrator | 2025-09-19 11:31:35 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:35.433177 | orchestrator | 2025-09-19 11:31:35 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:35.437397 | orchestrator | 2025-09-19 11:31:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:38.480024 | orchestrator | 2025-09-19 11:31:38 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:38.481335 | orchestrator | 2025-09-19 11:31:38 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:38.483535 | orchestrator | 2025-09-19 11:31:38 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:38.483619 | orchestrator | 2025-09-19 11:31:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:41.524373 | orchestrator | 2025-09-19 11:31:41 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:41.524567 | orchestrator | 2025-09-19 11:31:41 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:41.526457 | orchestrator | 2025-09-19 11:31:41 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:41.526485 | orchestrator | 2025-09-19 11:31:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:44.564335 | orchestrator | 2025-09-19 11:31:44 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:44.566293 | orchestrator | 2025-09-19 11:31:44 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:44.568855 | orchestrator | 2025-09-19 11:31:44 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:44.568889 | orchestrator | 2025-09-19 11:31:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:47.615273 | orchestrator | 2025-09-19 11:31:47 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:47.617654 | orchestrator | 2025-09-19 11:31:47 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state STARTED
2025-09-19 11:31:47.618484 | orchestrator | 2025-09-19 11:31:47 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:47.618517 | orchestrator | 2025-09-19 11:31:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:50.657726 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:31:50.657904 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:50.658651 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:31:50.660623 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:31:50.667665 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task 7d5fd74e-7b17-4dc7-b839-8778c00bb4ff is in state SUCCESS
2025-09-19 11:31:50.669620 | orchestrator |
2025-09-19 11:31:50.670383 | orchestrator |
2025-09-19 11:31:50.670405 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-19 11:31:50.670420 | orchestrator |
2025-09-19 11:31:50.670431 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-19 11:31:50.670444 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.196) 0:00:00.196 ******
2025-09-19 11:31:50.670455 | orchestrator | ok: [testbed-manager]
2025-09-19 11:31:50.670468 | orchestrator |
2025-09-19 11:31:50.670479 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-19 11:31:50.670490 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:01.032) 0:00:01.229 ******
2025-09-19 11:31:50.670501 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-19 11:31:50.670514 | orchestrator |
2025-09-19 11:31:50.670525 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-19 11:31:50.670536 | orchestrator | Friday 19 September 2025 11:30:04 +0000 (0:00:00.475) 0:00:01.705 ******
2025-09-19 11:31:50.670547 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.670558 | orchestrator |
2025-09-19 11:31:50.670569 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-19 11:31:50.670580 | orchestrator | Friday 19 September 2025 11:30:05 +0000 (0:00:01.489) 0:00:03.194 ******
2025-09-19 11:31:50.670591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-19 11:31:50.670602 | orchestrator | ok: [testbed-manager]
2025-09-19 11:31:50.670613 | orchestrator |
2025-09-19 11:31:50.670624 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-19 11:31:50.670635 | orchestrator | Friday 19 September 2025 11:30:48 +0000 (0:00:42.265) 0:00:45.460 ******
2025-09-19 11:31:50.670680 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.670691 | orchestrator |
2025-09-19 11:31:50.670702 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:31:50.670713 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:31:50.670726 | orchestrator |
2025-09-19 11:31:50.670736 | orchestrator |
2025-09-19 11:31:50.670747 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:31:50.670758 | orchestrator | Friday 19 September 2025 11:30:52 +0000 (0:00:04.043) 0:00:49.503 ******
2025-09-19 11:31:50.670769 | orchestrator | ===============================================================================
2025-09-19 11:31:50.670779 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 42.27s
2025-09-19 11:31:50.670790 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.04s
2025-09-19 11:31:50.670801 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.49s
2025-09-19 11:31:50.670811 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.03s
2025-09-19 11:31:50.670822 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.48s
2025-09-19 11:31:50.670833 | orchestrator |
2025-09-19 11:31:50.670843 | orchestrator |
2025-09-19 11:31:50.670868 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-19 11:31:50.670880 | orchestrator |
2025-09-19 11:31:50.670891 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 11:31:50.670901 | orchestrator | Friday 19 September 2025 11:29:38 +0000 (0:00:00.258) 0:00:00.259 ******
2025-09-19 11:31:50.670913 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:31:50.670925 | orchestrator |
2025-09-19 11:31:50.670936 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-19 11:31:50.670946 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:01.139) 0:00:01.398 ******
2025-09-19 11:31:50.670957 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.670968 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.670979 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.670989 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.671000 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.671010 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.671021 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671032 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:31:50.671043 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671053 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671064 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671118 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671130 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671141 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:31:50.671152 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671164 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671218 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671231 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671242 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671253 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671264 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:31:50.671275 | orchestrator |
2025-09-19 11:31:50.671286 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 11:31:50.671297 | orchestrator | Friday 19 September 2025 11:29:43 +0000 (0:00:03.530) 0:00:04.929 ******
2025-09-19 11:31:50.671309 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:31:50.671331 | orchestrator |
2025-09-19 11:31:50.671349 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-19 11:31:50.671368 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:01.242) 0:00:06.171 ******
2025-09-19 11:31:50.671394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671429 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671502 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671567 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671725 | orchestrator |
2025-09-19 11:31:50.671736 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-19 11:31:50.671747 | orchestrator | Friday 19 September 2025 11:29:50 +0000 (0:00:05.863) 0:00:12.035 ******
2025-09-19 11:31:50.671765 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671789 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671801 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:31:50.671813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.671831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.671862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.671874 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:50.671886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.671905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.671917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.671929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.671940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.671952 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:50.671963 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:50.671979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.671991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672022 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:31:50.672033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 
11:31:50.672064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672104 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:31:50.672116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672170 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:31:50.672181 | orchestrator | 2025-09-19 11:31:50.672192 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-19 11:31:50.672203 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:02.069) 0:00:14.104 ****** 2025-09-19 11:31:50.672215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672290 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:31:50.672305 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672350 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:50.672369 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:50.672388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672460 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:50.672474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-09-19 11:31:50.672485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672524 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:31:50.672535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672577 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:31:50.672588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:31:50.672599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.672628 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:31:50.672639 | orchestrator | 2025-09-19 11:31:50.672650 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-19 11:31:50.672661 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:03.394) 0:00:17.499 ****** 2025-09-19 11:31:50.672671 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:31:50.672682 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:50.672693 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:50.672703 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:50.672714 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:31:50.672725 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:31:50.672735 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:31:50.672746 | orchestrator | 2025-09-19 11:31:50.672761 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-19 11:31:50.672772 | orchestrator | Friday 19 September 2025 11:29:57 +0000 (0:00:01.755) 0:00:19.255 ****** 2025-09-19 11:31:50.672783 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:31:50.672794 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:50.672804 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:50.672815 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:31:50.672825 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:31:50.672835 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:31:50.672846 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:31:50.672857 | orchestrator | 2025-09-19 11:31:50.672867 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-19 11:31:50.672878 | orchestrator | Friday 19 September 2025 11:29:59 +0000 (0:00:01.495) 0:00:20.750 ****** 2025-09-19 11:31:50.672889 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.672982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.672994 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.673034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673138 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.673223 | orchestrator | 2025-09-19 11:31:50.673234 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-19 11:31:50.673245 | orchestrator | Friday 19 September 2025 11:30:04 +0000 (0:00:05.603) 0:00:26.353 ****** 2025-09-19 11:31:50.673256 | orchestrator | [WARNING]: Skipped 2025-09-19 11:31:50.673268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-19 11:31:50.673279 | orchestrator | to this access issue: 2025-09-19 11:31:50.673290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-19 11:31:50.673300 | orchestrator | directory 2025-09-19 11:31:50.673312 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:31:50.673323 | orchestrator | 2025-09-19 11:31:50.673334 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-19 11:31:50.673344 | orchestrator | Friday 19 September 2025 11:30:05 +0000 (0:00:01.038) 0:00:27.391 ****** 2025-09-19 11:31:50.673355 | orchestrator | [WARNING]: Skipped 2025-09-19 11:31:50.673366 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-19 11:31:50.673377 | orchestrator | to this access issue: 2025-09-19 11:31:50.673395 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-19 11:31:50.673416 | orchestrator | directory 2025-09-19 11:31:50.673435 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:31:50.673455 | orchestrator | 2025-09-19 11:31:50.673474 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-19 11:31:50.673493 | orchestrator | Friday 19 September 2025 
11:30:06 +0000 (0:00:01.159) 0:00:28.551 ****** 2025-09-19 11:31:50.673504 | orchestrator | [WARNING]: Skipped 2025-09-19 11:31:50.673515 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-19 11:31:50.673526 | orchestrator | to this access issue: 2025-09-19 11:31:50.673537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-19 11:31:50.673549 | orchestrator | directory 2025-09-19 11:31:50.673560 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:31:50.673571 | orchestrator | 2025-09-19 11:31:50.673582 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-19 11:31:50.673593 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:00.780) 0:00:29.331 ****** 2025-09-19 11:31:50.673604 | orchestrator | [WARNING]: Skipped 2025-09-19 11:31:50.673614 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-19 11:31:50.673625 | orchestrator | to this access issue: 2025-09-19 11:31:50.673636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-19 11:31:50.673647 | orchestrator | directory 2025-09-19 11:31:50.673666 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:31:50.673677 | orchestrator | 2025-09-19 11:31:50.673688 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-19 11:31:50.673698 | orchestrator | Friday 19 September 2025 11:30:08 +0000 (0:00:00.796) 0:00:30.127 ****** 2025-09-19 11:31:50.673709 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:50.673720 | orchestrator | changed: [testbed-manager] 2025-09-19 11:31:50.673731 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:31:50.673742 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:31:50.673752 | orchestrator | changed: [testbed-node-3] 2025-09-19 
11:31:50.673763 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:31:50.673774 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:31:50.673785 | orchestrator | 2025-09-19 11:31:50.673796 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-19 11:31:50.673807 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:05.575) 0:00:35.703 ****** 2025-09-19 11:31:50.673818 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673840 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673874 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673885 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673896 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 11:31:50.673907 | orchestrator | 2025-09-19 11:31:50.673918 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-19 11:31:50.673929 | orchestrator | Friday 19 September 2025 11:30:17 +0000 (0:00:03.280) 0:00:38.983 ****** 2025-09-19 11:31:50.673939 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:50.673950 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:31:50.673961 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:31:50.673972 | orchestrator | changed: [testbed-node-3] 2025-09-19 
11:31:50.673983 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:31:50.673994 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:31:50.674004 | orchestrator | changed: [testbed-manager] 2025-09-19 11:31:50.674015 | orchestrator | 2025-09-19 11:31:50.674141 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-19 11:31:50.674152 | orchestrator | Friday 19 September 2025 11:30:21 +0000 (0:00:03.968) 0:00:42.951 ****** 2025-09-19 11:31:50.674164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674208 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674257 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674317 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674329 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674340 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674352 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674406 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:31:50.674450 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674471 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674490 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674510 | orchestrator | 2025-09-19 11:31:50.674530 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-19 11:31:50.674548 | orchestrator | Friday 19 September 2025 11:30:24 +0000 (0:00:02.874) 0:00:45.826 ****** 2025-09-19 11:31:50.674564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674575 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674586 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674605 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674616 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674627 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674638 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:31:50.674648 | orchestrator | 2025-09-19 11:31:50.674659 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 11:31:50.674670 | orchestrator | Friday 19 September 2025 11:30:27 +0000 (0:00:03.426) 0:00:49.252 ****** 2025-09-19 11:31:50.674681 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674692 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674702 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674713 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674724 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674743 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674754 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:31:50.674764 | orchestrator | 2025-09-19 11:31:50.674775 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 11:31:50.674786 | orchestrator | Friday 19 September 2025 11:30:29 +0000 (0:00:02.314) 0:00:51.566 ****** 2025-09-19 11:31:50.674797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674815 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674827 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:31:50.674879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674898 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:50.674914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.674926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:31:50.674937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.674949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.674977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.674996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675008 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:50.675136 | orchestrator |
2025-09-19 11:31:50.675154 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-19 11:31:50.675166 | orchestrator | Friday 19 September 2025 11:30:32 +0000 (0:00:02.888) 0:00:54.454 ******
2025-09-19 11:31:50.675177 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.675196 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:50.675207 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:50.675218 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:50.675229 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:50.675239 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:50.675250 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:50.675261 | orchestrator |
2025-09-19 11:31:50.675272 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-19 11:31:50.675283 | orchestrator | Friday 19 September 2025 11:30:34 +0000 (0:00:01.875) 0:00:56.330 ******
2025-09-19 11:31:50.675294 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.675304 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:50.675315 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:50.675326 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:50.675336 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:50.675347 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:50.675358 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:50.675369 | orchestrator |
2025-09-19 11:31:50.675380 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675391 | orchestrator | Friday 19 September 2025 11:30:35 +0000 (0:00:01.266) 0:00:57.597 ******
2025-09-19 11:31:50.675402 | orchestrator |
2025-09-19 11:31:50.675412 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675423 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.071) 0:00:57.669 ******
2025-09-19 11:31:50.675434 | orchestrator |
2025-09-19 11:31:50.675445 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675456 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.084) 0:00:57.753 ******
2025-09-19 11:31:50.675467 | orchestrator |
2025-09-19 11:31:50.675478 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675494 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.067) 0:00:57.821 ******
2025-09-19 11:31:50.675512 | orchestrator |
2025-09-19 11:31:50.675531 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675551 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.223) 0:00:58.044 ******
2025-09-19 11:31:50.675570 | orchestrator |
2025-09-19 11:31:50.675588 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675600 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.063) 0:00:58.108 ******
2025-09-19 11:31:50.675610 | orchestrator |
2025-09-19 11:31:50.675621 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 11:31:50.675632 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.066) 0:00:58.174 ******
2025-09-19 11:31:50.675642 | orchestrator |
2025-09-19 11:31:50.675653 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-19 11:31:50.675664 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.070) 0:00:58.245 ******
2025-09-19 11:31:50.675675 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:50.675685 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.675696 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:50.675707 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:50.675717 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:50.675728 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:50.675739 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:50.675749 | orchestrator |
2025-09-19 11:31:50.675760 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-19 11:31:50.675771 | orchestrator | Friday 19 September 2025 11:31:07 +0000 (0:00:30.889) 0:01:29.135 ******
2025-09-19 11:31:50.675781 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:50.675792 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:50.675803 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:50.675814 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:50.675834 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.675845 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:50.675855 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:50.675866 | orchestrator |
2025-09-19 11:31:50.675877 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-19 11:31:50.675888 | orchestrator | Friday 19 September 2025 11:31:40 +0000 (0:00:32.530) 0:02:01.665 ******
2025-09-19 11:31:50.675898 | orchestrator | ok: [testbed-manager]
2025-09-19 11:31:50.675910 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:50.675920 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:50.675931 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:50.675942 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:31:50.675952 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:31:50.675963 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:31:50.675973 | orchestrator |
2025-09-19 11:31:50.675984 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-19 11:31:50.675995 | orchestrator | Friday 19 September 2025 11:31:42 +0000 (0:00:02.223) 0:02:03.889 ******
2025-09-19 11:31:50.676006 | orchestrator | changed: [testbed-manager]
2025-09-19 11:31:50.676017 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:50.676027 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:50.676038 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:50.676049 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:50.676060 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:50.676098 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:50.676111 | orchestrator |
2025-09-19 11:31:50.676122 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:31:50.676133 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676145 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676164 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676176 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676187 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676197 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676208 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 11:31:50.676219 | orchestrator |
2025-09-19 11:31:50.676229 | orchestrator |
2025-09-19 11:31:50.676240 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:31:50.676251 | orchestrator | Friday 19 September 2025 11:31:47 +0000 (0:00:04.906) 0:02:08.795 ******
2025-09-19 11:31:50.676262 | orchestrator | ===============================================================================
2025-09-19 11:31:50.676272 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.53s
2025-09-19 11:31:50.676283 | orchestrator | common : Restart fluentd container ------------------------------------- 30.89s
2025-09-19 11:31:50.676294 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.86s
2025-09-19 11:31:50.676305 | orchestrator | common : Copying over config.json files for services -------------------- 5.60s
2025-09-19 11:31:50.676316 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.58s
2025-09-19 11:31:50.676327 | orchestrator | common : Restart cron container ----------------------------------------- 4.91s
2025-09-19 11:31:50.676346 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.97s
2025-09-19 11:31:50.676357 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.53s
2025-09-19 11:31:50.676367 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.43s
2025-09-19 11:31:50.676378 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.39s
2025-09-19 11:31:50.676388 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.28s
2025-09-19 11:31:50.676399 | orchestrator | common : Check common containers ---------------------------------------- 2.89s
2025-09-19 11:31:50.676410 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.87s
2025-09-19 11:31:50.676428 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.31s
2025-09-19 11:31:50.676439 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.22s
2025-09-19 11:31:50.676459 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.07s
2025-09-19 11:31:50.676470 | orchestrator | common : Creating log volume -------------------------------------------- 1.88s
2025-09-19 11:31:50.676481 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.76s
2025-09-19 11:31:50.676491 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.50s
2025-09-19 11:31:50.676502 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.27s
2025-09-19 11:31:50.676513 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:50.676528 | orchestrator | 2025-09-19 11:31:50 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:31:50.676547 | orchestrator | 2025-09-19 11:31:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:53.700107 | orchestrator | 2025-09-19 11:31:53 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:31:53.700250 | orchestrator | 2025-09-19 11:31:53 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:53.700740 | orchestrator | 2025-09-19 11:31:53 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:31:53.701355 | orchestrator
| 2025-09-19 11:31:53 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:31:53.701954 | orchestrator | 2025-09-19 11:31:53 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:53.702664 | orchestrator | 2025-09-19 11:31:53 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:31:53.702688 | orchestrator | 2025-09-19 11:31:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:56.726421 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:31:56.726505 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:56.726844 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:31:56.729314 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:31:56.729562 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:56.730213 | orchestrator | 2025-09-19 11:31:56 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:31:56.730239 | orchestrator | 2025-09-19 11:31:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:59.748217 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:31:59.748445 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:31:59.749027 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:31:59.749568 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:31:59.754190 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:31:59.754628 | orchestrator | 2025-09-19 11:31:59 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:31:59.754736 | orchestrator | 2025-09-19 11:31:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:02.785747 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:32:02.785932 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:02.787433 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:02.788103 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:02.788559 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:02.789165 | orchestrator | 2025-09-19 11:32:02 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:02.789191 | orchestrator | 2025-09-19 11:32:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:05.820509 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state STARTED
2025-09-19 11:32:05.820670 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:05.821218 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:05.821902 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:05.822576 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:05.823979 | orchestrator | 2025-09-19 11:32:05 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:05.824002 | orchestrator | 2025-09-19 11:32:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:08.858418 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task f185908c-ff61-49a8-98be-fb22c3eb954f is in state SUCCESS
2025-09-19 11:32:08.859082 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:08.861363 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:08.862184 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:08.862922 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:08.864141 | orchestrator | 2025-09-19 11:32:08 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:08.864199 | orchestrator | 2025-09-19 11:32:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:11.941159 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:11.941286 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:11.941302 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:11.941314 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:11.941325 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:11.941336 | orchestrator | 2025-09-19 11:32:11 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:11.941347 | orchestrator | 2025-09-19 11:32:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:14.990170 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:14.990240 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:14.990251 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:14.990467 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:14.991008 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:14.991595 | orchestrator | 2025-09-19 11:32:14 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:14.991613 | orchestrator | 2025-09-19 11:32:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:18.039505 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:18.039593 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state STARTED
2025-09-19 11:32:18.039607 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:18.039619 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:18.039629 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:18.039641 | orchestrator | 2025-09-19 11:32:18 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:18.039652 | orchestrator | 2025-09-19 11:32:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:21.069120 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:21.069865 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task d1814d81-e07e-4be1-b72a-af836f1122e9 is in state SUCCESS
2025-09-19 11:32:21.070219 | orchestrator |
2025-09-19 11:32:21.070246 | orchestrator |
2025-09-19 11:32:21.070258 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:32:21.070270 | orchestrator |
2025-09-19 11:32:21.070281 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:32:21.070292 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:00.458) 0:00:00.458 ******
2025-09-19 11:32:21.070303 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:32:21.070315 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:32:21.070326 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:32:21.070337 | orchestrator |
2025-09-19 11:32:21.070348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:32:21.070359 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:00.439) 0:00:00.897 ******
2025-09-19 11:32:21.070395 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-19 11:32:21.070407 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-19 11:32:21.070417 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-19 11:32:21.070428 | orchestrator |
2025-09-19 11:32:21.070439 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-19 11:32:21.070450 | orchestrator |
2025-09-19 11:32:21.070461 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-19 11:32:21.070472 | orchestrator | Friday 19 September 2025 11:31:56 +0000 (0:00:00.582) 0:00:01.479 ******
2025-09-19 11:32:21.070483 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:32:21.070494 | orchestrator |
2025-09-19 11:32:21.070505 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-19 11:32:21.070515 | orchestrator | Friday 19 September 2025 11:31:56 +0000 (0:00:00.571) 0:00:02.050 ******
2025-09-19 11:32:21.070526 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 11:32:21.070577 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 11:32:21.070588 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 11:32:21.070599 | orchestrator |
2025-09-19 11:32:21.070610 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-19 11:32:21.070696 | orchestrator | Friday 19 September 2025 11:31:57 +0000 (0:00:00.809) 0:00:02.860 ******
2025-09-19 11:32:21.070708 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 11:32:21.070719 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 11:32:21.070730 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 11:32:21.070741 | orchestrator |
2025-09-19 11:32:21.070752 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-19 11:32:21.070762 | orchestrator | Friday 19 September 2025 11:31:59 +0000 (0:00:02.130) 0:00:04.990 ******
2025-09-19 11:32:21.070773 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:32:21.070784 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:32:21.070795 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:32:21.070806 | orchestrator |
2025-09-19 11:32:21.070817 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-19 11:32:21.070827 | orchestrator | Friday 19 September 2025 11:32:01 +0000 (0:00:01.779) 0:00:06.770 ******
2025-09-19 11:32:21.070838 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:32:21.070851 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:32:21.070864 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:32:21.070876 | orchestrator |
2025-09-19 11:32:21.070889 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:32:21.070902 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.070940 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.070992 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.071005 | orchestrator |
2025-09-19 11:32:21.071040 | orchestrator |
2025-09-19 11:32:21.071053 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:32:21.071075 | orchestrator | Friday 19 September 2025 11:32:07 +0000 (0:00:06.366) 0:00:13.137 ******
2025-09-19 11:32:21.071088 | orchestrator | ===============================================================================
2025-09-19 11:32:21.071100 | orchestrator | memcached : Restart memcached container --------------------------------- 6.37s
2025-09-19 11:32:21.071112 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.13s
2025-09-19 11:32:21.071125 | orchestrator | memcached : Check memcached container ----------------------------------- 1.78s
2025-09-19 11:32:21.071171 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.81s
2025-09-19 11:32:21.071187 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-09-19 11:32:21.071200 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.57s
2025-09-19 11:32:21.071211 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-09-19 11:32:21.071222 | orchestrator |
2025-09-19 11:32:21.071406 | orchestrator |
2025-09-19 11:32:21.071421 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:32:21.071432 | orchestrator |
2025-09-19 11:32:21.071443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:32:21.071453 | orchestrator | Friday 19 September 2025 11:31:54 +0000 (0:00:00.504) 0:00:00.504 ******
2025-09-19 11:32:21.071464 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:32:21.071475 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:32:21.071486 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:32:21.071496 | orchestrator |
2025-09-19 11:32:21.071515 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:32:21.071527 | orchestrator | Friday 19 September 2025 11:31:54 +0000 (0:00:00.474) 0:00:00.978 ******
2025-09-19 11:32:21.071537 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-19 11:32:21.071548 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-19 11:32:21.071559 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-19 11:32:21.071570 | orchestrator |
2025-09-19 11:32:21.071581 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-19 11:32:21.071591 | orchestrator |
2025-09-19 11:32:21.071603 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-19 11:32:21.071613 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:00.633) 0:00:01.612 ******
2025-09-19 11:32:21.071624 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:32:21.071635 | orchestrator |
2025-09-19 11:32:21.071646 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-19 11:32:21.071656 | orchestrator | Friday 19 September 2025 11:31:56 +0000 (0:00:00.847) 0:00:02.460 ******
2025-09-19 11:32:21.071670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071832 | orchestrator |
2025-09-19 11:32:21.071851 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-19 11:32:21.071870 | orchestrator | Friday 19 September 2025 11:31:57 +0000 (0:00:01.363) 0:00:03.824 ******
2025-09-19 11:32:21.071883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 11:32:21.071962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True,
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.071974 | orchestrator | 2025-09-19 11:32:21.071985 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 11:32:21.071996 | orchestrator | Friday 19 September 2025 11:32:00 +0000 (0:00:02.905) 0:00:06.729 ****** 2025-09-19 11:32:21.072008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072154 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072168 | orchestrator | 2025-09-19 11:32:21.072180 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 11:32:21.072193 | orchestrator | Friday 19 September 2025 11:32:03 +0000 (0:00:02.577) 0:00:09.307 ****** 2025-09-19 11:32:21.072205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:32:21.072296 | orchestrator | 2025-09-19 11:32:21.072309 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 11:32:21.072326 | orchestrator | Friday 19 September 2025 11:32:04 +0000 (0:00:01.766) 0:00:11.074 ****** 2025-09-19 11:32:21.072339 | orchestrator | 2025-09-19 11:32:21.072351 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 11:32:21.072364 | orchestrator | Friday 19 September 2025 11:32:04 +0000 (0:00:00.088) 0:00:11.162 ****** 2025-09-19 11:32:21.072377 | orchestrator | 2025-09-19 11:32:21.072388 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 11:32:21.072399 | orchestrator | Friday 19 September 2025 11:32:05 +0000 (0:00:00.062) 0:00:11.224 ****** 2025-09-19 11:32:21.072410 | orchestrator | 2025-09-19 11:32:21.072420 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-19 11:32:21.072431 | orchestrator | Friday 19 September 2025 11:32:05 +0000 (0:00:00.062) 0:00:11.287 ****** 
2025-09-19 11:32:21.072441 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:32:21.072452 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:32:21.072463 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:32:21.072474 | orchestrator |
2025-09-19 11:32:21.072485 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-19 11:32:21.072495 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:03.251) 0:00:14.538 ******
2025-09-19 11:32:21.072506 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:32:21.072516 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:32:21.072527 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:32:21.072537 | orchestrator |
2025-09-19 11:32:21.072546 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:32:21.072561 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.072572 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.072582 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:32:21.072591 | orchestrator |
2025-09-19 11:32:21.072641 | orchestrator |
2025-09-19 11:32:21.072653 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:32:21.072663 | orchestrator | Friday 19 September 2025 11:32:18 +0000 (0:00:10.296) 0:00:24.834 ******
2025-09-19 11:32:21.072673 | orchestrator | ===============================================================================
2025-09-19 11:32:21.072682 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.30s
2025-09-19 11:32:21.072691 | orchestrator | redis : Restart redis container ----------------------------------------- 3.25s
2025-09-19 11:32:21.072701 | orchestrator | redis : Copying over default config.json files -------------------------- 2.91s
2025-09-19 11:32:21.072711 | orchestrator | redis : Copying over redis config files --------------------------------- 2.58s
2025-09-19 11:32:21.072720 | orchestrator | redis : Check redis containers ------------------------------------------ 1.77s
2025-09-19 11:32:21.072729 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s
2025-09-19 11:32:21.072739 | orchestrator | redis : include_tasks --------------------------------------------------- 0.85s
2025-09-19 11:32:21.072748 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-09-19 11:32:21.072757 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-09-19 11:32:21.072767 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-09-19 11:32:21.072777 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:21.072850 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:21.074222 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:21.075201 | orchestrator | 2025-09-19 11:32:21 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:21.076464 | orchestrator | 2025-09-19 11:32:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:24.168176 | orchestrator | 2025-09-19 11:32:24 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:24.168391 | orchestrator | 2025-09-19 11:32:24 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:24.168716 | orchestrator | 2025-09-19 11:32:24 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:24.169462 | orchestrator | 2025-09-19 11:32:24 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:24.170153 | orchestrator | 2025-09-19 11:32:24 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:24.170402 | orchestrator | 2025-09-19 11:32:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:27.217534 | orchestrator | 2025-09-19 11:32:27 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:27.217937 | orchestrator | 2025-09-19 11:32:27 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:27.218738 | orchestrator | 2025-09-19 11:32:27 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:27.219728 | orchestrator | 2025-09-19 11:32:27 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:27.220339 | orchestrator | 2025-09-19 11:32:27 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:27.220442 | orchestrator | 2025-09-19 11:32:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:30.329848 | orchestrator | 2025-09-19 11:32:30 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:30.423082 | orchestrator | 2025-09-19 11:32:30 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:30.423330 | orchestrator | 2025-09-19 11:32:30 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:30.423914 | orchestrator | 2025-09-19 11:32:30 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:30.424961 | orchestrator | 2025-09-19 11:32:30 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:30.424983 | orchestrator | 2025-09-19 11:32:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:33.484304 | orchestrator | 2025-09-19 11:32:33 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:33.484377 | orchestrator | 2025-09-19 11:32:33 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:33.484389 | orchestrator | 2025-09-19 11:32:33 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:33.484399 | orchestrator | 2025-09-19 11:32:33 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:33.484408 | orchestrator | 2025-09-19 11:32:33 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:33.484417 | orchestrator | 2025-09-19 11:32:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:36.608087 | orchestrator | 2025-09-19 11:32:36 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:36.610456 | orchestrator | 2025-09-19 11:32:36 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:36.614287 | orchestrator | 2025-09-19 11:32:36 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:36.616012 | orchestrator | 2025-09-19 11:32:36 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:36.617682 | orchestrator | 2025-09-19 11:32:36 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:36.617725 | orchestrator | 2025-09-19 11:32:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:39.653900 | orchestrator | 2025-09-19 11:32:39 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:39.654172 | orchestrator | 2025-09-19 11:32:39 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:39.654452 | orchestrator | 2025-09-19 11:32:39 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:39.655697 | orchestrator | 2025-09-19 11:32:39 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:39.656083 | orchestrator | 2025-09-19 11:32:39 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:39.656209 | orchestrator | 2025-09-19 11:32:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:42.693960 | orchestrator | 2025-09-19 11:32:42 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:42.694161 | orchestrator | 2025-09-19 11:32:42 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:42.695730 | orchestrator | 2025-09-19 11:32:42 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:42.696037 | orchestrator | 2025-09-19 11:32:42 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:42.696799 | orchestrator | 2025-09-19 11:32:42 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:42.697380 | orchestrator | 2025-09-19 11:32:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:45.754683 | orchestrator | 2025-09-19 11:32:45 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:45.754799 | orchestrator | 2025-09-19 11:32:45 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:45.755463 | orchestrator | 2025-09-19 11:32:45 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:45.756209 | orchestrator | 2025-09-19 11:32:45 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:45.756591 | orchestrator | 2025-09-19 11:32:45 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:45.756620 | orchestrator | 2025-09-19 11:32:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:48.866269 | orchestrator | 2025-09-19 11:32:48 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:48.868215 | orchestrator | 2025-09-19 11:32:48 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:48.871307 | orchestrator | 2025-09-19 11:32:48 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:48.875034 | orchestrator | 2025-09-19 11:32:48 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:48.876264 | orchestrator | 2025-09-19 11:32:48 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:48.876340 | orchestrator | 2025-09-19 11:32:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:51.933313 | orchestrator | 2025-09-19 11:32:51 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:51.933413 | orchestrator | 2025-09-19 11:32:51 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:51.933427 | orchestrator | 2025-09-19 11:32:51 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:51.933993 | orchestrator | 2025-09-19 11:32:51 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:51.934920 | orchestrator | 2025-09-19 11:32:51 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:51.935018 | orchestrator | 2025-09-19 11:32:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:54.972658 | orchestrator | 2025-09-19 11:32:54 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:54.973142 | orchestrator | 2025-09-19 11:32:54 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:54.973814 | orchestrator | 2025-09-19 11:32:54 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:54.975876 | orchestrator | 2025-09-19 11:32:54 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:54.976737 | orchestrator | 2025-09-19 11:32:54 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:54.976761 | orchestrator | 2025-09-19 11:32:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:32:58.232226 | orchestrator | 2025-09-19 11:32:58 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:32:58.232292 | orchestrator | 2025-09-19 11:32:58 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:32:58.232858 | orchestrator | 2025-09-19 11:32:58 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:32:58.233355 | orchestrator | 2025-09-19 11:32:58 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:32:58.234158 | orchestrator | 2025-09-19 11:32:58 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:32:58.234187 | orchestrator | 2025-09-19 11:32:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:33:01.325816 | orchestrator | 2025-09-19 11:33:01 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:33:01.325874 | orchestrator | 2025-09-19 11:33:01 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:33:01.325881 | orchestrator | 2025-09-19 11:33:01 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:33:01.325886 | orchestrator | 2025-09-19 11:33:01 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:33:01.325892 | orchestrator | 2025-09-19 11:33:01 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:33:01.325897 | orchestrator | 2025-09-19 11:33:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:33:04.355567 | orchestrator | 2025-09-19 11:33:04 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:33:04.355846 | orchestrator | 2025-09-19 11:33:04 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:33:04.356558 | orchestrator | 2025-09-19 11:33:04 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:33:04.357157 | orchestrator | 2025-09-19 11:33:04 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:33:04.357721 | orchestrator | 2025-09-19 11:33:04 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state STARTED
2025-09-19 11:33:04.357839 | orchestrator | 2025-09-19 11:33:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:33:07.687620 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:33:07.687707 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:33:07.691678 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:33:07.695185 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state STARTED
2025-09-19 11:33:07.698514 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED
2025-09-19 11:33:07.701298 | orchestrator | 2025-09-19 11:33:07 | INFO  | Task 0252c4ab-2484-4c98-b210-4373001950cb is in state SUCCESS
2025-09-19 11:33:07.701319 | orchestrator | 2025-09-19 11:33:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:33:07.702691 | orchestrator |
2025-09-19 11:33:07.702720 | orchestrator |
2025-09-19 11:33:07.702732 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:33:07.702743 | orchestrator |
2025-09-19 11:33:07.702754 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:33:07.702766 | orchestrator | Friday 19 September 2025 11:31:54 +0000 (0:00:00.343) 0:00:00.343 ******
2025-09-19 11:33:07.702801 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:07.702813 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:07.702824 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:07.702835 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:07.702845 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:07.702856 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:07.702866 | orchestrator |
2025-09-19 11:33:07.702877 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:33:07.702888 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:01.007) 0:00:01.350 ******
2025-09-19 11:33:07.702898 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702909 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702948 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702960 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702970 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702981 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 11:33:07.702991 | orchestrator |
2025-09-19 11:33:07.703002 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-19 11:33:07.703013 | orchestrator |
2025-09-19 11:33:07.703024 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-19 11:33:07.703035 | orchestrator | Friday 19 September 2025 11:31:57 +0000 (0:00:01.122) 0:00:02.473 ****** 2025-09-19 11:33:07.703047 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:33:07.703058 | orchestrator | 2025-09-19 11:33:07.703069 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 11:33:07.703080 | orchestrator | Friday 19 September 2025 11:31:58 +0000 (0:00:01.428) 0:00:03.901 ****** 2025-09-19 11:33:07.703091 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 11:33:07.703102 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 11:33:07.703113 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 11:33:07.703123 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 11:33:07.703134 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 11:33:07.703145 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-19 11:33:07.703155 | orchestrator | 2025-09-19 11:33:07.703166 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 11:33:07.703177 | orchestrator | Friday 19 September 2025 11:31:59 +0000 (0:00:01.413) 0:00:05.315 ****** 2025-09-19 11:33:07.703188 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 11:33:07.703198 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 11:33:07.703209 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 11:33:07.703220 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 11:33:07.703230 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 11:33:07.703241 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 
2025-09-19 11:33:07.703251 | orchestrator |
2025-09-19 11:33:07.703262 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 11:33:07.703273 | orchestrator | Friday 19 September 2025 11:32:01 +0000 (0:00:01.716) 0:00:07.031 ******
2025-09-19 11:33:07.703294 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-19 11:33:07.703307 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:07.703320 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-19 11:33:07.703332 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:07.703343 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-19 11:33:07.703364 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:07.703377 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-19 11:33:07.703389 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:07.703400 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-19 11:33:07.703413 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:07.703424 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-19 11:33:07.703436 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:07.703448 | orchestrator |
2025-09-19 11:33:07.703460 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-19 11:33:07.703473 | orchestrator | Friday 19 September 2025 11:32:03 +0000 (0:00:01.473) 0:00:08.504 ******
2025-09-19 11:33:07.703484 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:07.703496 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:07.703509 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:07.703521 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:07.703532 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:07.703544 | orchestrator | skipping: [testbed-node-5]
2025-09-19
11:33:07.703556 | orchestrator | 2025-09-19 11:33:07.703568 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 11:33:07.703580 | orchestrator | Friday 19 September 2025 11:32:03 +0000 (0:00:00.600) 0:00:09.105 ****** 2025-09-19 11:33:07.703607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2025-09-19 11:33:07.703690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703801 | orchestrator | 2025-09-19 11:33:07.703813 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 11:33:07.703824 | orchestrator | Friday 19 September 2025 11:32:05 +0000 (0:00:01.651) 0:00:10.756 ****** 2025-09-19 11:33:07.703835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703950 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.703996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704038 | orchestrator | 2025-09-19 11:33:07.704049 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-19 11:33:07.704060 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:02.911) 0:00:13.668 ****** 2025-09-19 11:33:07.704071 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:07.704082 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:07.704093 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:07.704104 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:07.704114 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:07.704125 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:07.704136 | orchestrator | 2025-09-19 11:33:07.704147 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-19 11:33:07.704158 | orchestrator | Friday 19 September 2025 11:32:11 +0000 (0:00:03.067) 0:00:16.736 ****** 2025-09-19 11:33:07.704169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:33:07.704347 | orchestrator | 2025-09-19 11:33:07.704358 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704369 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:02.544) 0:00:19.280 ****** 2025-09-19 11:33:07.704380 | orchestrator | 2025-09-19 11:33:07.704391 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704402 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.587) 0:00:19.867 ****** 2025-09-19 11:33:07.704413 | orchestrator | 2025-09-19 11:33:07.704423 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704434 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.382) 0:00:20.250 ****** 2025-09-19 11:33:07.704445 | orchestrator | 2025-09-19 11:33:07.704455 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704466 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.129) 0:00:20.379 ****** 2025-09-19 11:33:07.704477 | orchestrator | 2025-09-19 11:33:07.704488 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704498 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.152) 0:00:20.532 ****** 2025-09-19 11:33:07.704509 | orchestrator | 2025-09-19 11:33:07.704520 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 11:33:07.704530 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.248) 0:00:20.780 ****** 2025-09-19 11:33:07.704541 | orchestrator | 2025-09-19 
11:33:07.704552 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-19 11:33:07.704562 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.170) 0:00:20.950 ******
2025-09-19 11:33:07.704573 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:07.704584 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:07.704595 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:33:07.704606 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:33:07.704616 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:07.704627 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:33:07.704638 | orchestrator |
2025-09-19 11:33:07.704649 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-19 11:33:07.704659 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:10.484) 0:00:31.435 ******
2025-09-19 11:33:07.704670 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:07.704686 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:07.704696 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:07.704707 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:07.704718 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:07.704729 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:07.704739 | orchestrator |
2025-09-19 11:33:07.704750 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 11:33:07.704761 | orchestrator | Friday 19 September 2025 11:32:27 +0000 (0:00:01.759) 0:00:33.194 ******
2025-09-19 11:33:07.704772 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:07.704783 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:33:07.704794 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:33:07.704805 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:07.704815 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:07.704826 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:33:07.704837 | orchestrator |
2025-09-19 11:33:07.704847 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-19 11:33:07.704858 | orchestrator | Friday 19 September 2025 11:32:38 +0000 (0:00:10.359) 0:00:43.554 ******
2025-09-19 11:33:07.704869 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-19 11:33:07.704880 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-19 11:33:07.704897 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-19 11:33:07.704908 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-19 11:33:07.704932 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-19 11:33:07.704949 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-19 11:33:07.704960 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-19 11:33:07.704971 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-19 11:33:07.704982 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-19 11:33:07.704992 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-19 11:33:07.705003 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-19 11:33:07.705014 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-19 11:33:07.705024 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705035 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705046 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705056 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705067 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705077 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:33:07.705088 | orchestrator |
2025-09-19 11:33:07.705099 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-19 11:33:07.705110 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:08.262) 0:00:51.816 ******
2025-09-19 11:33:07.705121 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-19 11:33:07.705132 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:07.705142 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-19 11:33:07.705153 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:07.705164 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-19 11:33:07.705175 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:07.705186 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-19 11:33:07.705197 | orchestrator | changed: [testbed-node-1]
=> (item=br-ex) 2025-09-19 11:33:07.705207 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-19 11:33:07.705218 | orchestrator | 2025-09-19 11:33:07.705229 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-19 11:33:07.705240 | orchestrator | Friday 19 September 2025 11:32:49 +0000 (0:00:03.090) 0:00:54.907 ****** 2025-09-19 11:33:07.705251 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-19 11:33:07.705261 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:07.705272 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-19 11:33:07.705283 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:07.705294 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-19 11:33:07.705311 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:07.705322 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-19 11:33:07.705333 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-19 11:33:07.705348 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-19 11:33:07.705359 | orchestrator | 2025-09-19 11:33:07.705370 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 11:33:07.705381 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:04.164) 0:00:59.071 ****** 2025-09-19 11:33:07.705391 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:07.705402 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:07.705413 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:07.705424 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:07.705435 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:07.705445 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:07.705456 | orchestrator | 2025-09-19 11:33:07.705467 | orchestrator 
| PLAY RECAP ********************************************************************* 2025-09-19 11:33:07.705478 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:33:07.705489 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:33:07.705500 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:33:07.705511 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:33:07.705522 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:33:07.705538 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:33:07.705549 | orchestrator | 2025-09-19 11:33:07.705560 | orchestrator | 2025-09-19 11:33:07.705571 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:33:07.705582 | orchestrator | Friday 19 September 2025 11:33:04 +0000 (0:00:10.664) 0:01:09.736 ****** 2025-09-19 11:33:07.705592 | orchestrator | =============================================================================== 2025-09-19 11:33:07.705603 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.02s 2025-09-19 11:33:07.705614 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.48s 2025-09-19 11:33:07.705624 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.26s 2025-09-19 11:33:07.705635 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.16s 2025-09-19 11:33:07.705646 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.09s 2025-09-19 11:33:07.705656 | orchestrator 
| openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.07s 2025-09-19 11:33:07.705667 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.91s 2025-09-19 11:33:07.705678 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.54s 2025-09-19 11:33:07.705688 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.76s 2025-09-19 11:33:07.705699 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.72s 2025-09-19 11:33:07.705710 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.67s 2025-09-19 11:33:07.705720 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.65s 2025-09-19 11:33:07.705731 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.47s 2025-09-19 11:33:07.705747 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.43s 2025-09-19 11:33:07.705758 | orchestrator | module-load : Load modules ---------------------------------------------- 1.41s 2025-09-19 11:33:07.705769 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2025-09-19 11:33:07.705779 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s 2025-09-19 11:33:07.705790 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.60s 2025-09-19 11:33:10.733437 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:10.736878 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 8bc9f70c-6b1a-44e7-8e9c-920a3da8946f is in state STARTED 2025-09-19 11:33:10.737701 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 
11:33:10.738622 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 84ab3860-c70a-4615-a092-3085bd9430e1 is in state STARTED 2025-09-19 11:33:10.739233 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:10.740658 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 35d7a2bc-37b2-461a-a72b-11ee9e0435cf is in state SUCCESS 2025-09-19 11:33:10.742141 | orchestrator | 2025-09-19 11:33:10.742170 | orchestrator | 2025-09-19 11:33:10.742178 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-19 11:33:10.742186 | orchestrator | 2025-09-19 11:33:10.742194 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-19 11:33:10.742201 | orchestrator | Friday 19 September 2025 11:29:38 +0000 (0:00:00.202) 0:00:00.202 ****** 2025-09-19 11:33:10.742208 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:33:10.742216 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:33:10.742223 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:33:10.742230 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.742237 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.742244 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.742250 | orchestrator | 2025-09-19 11:33:10.742271 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-19 11:33:10.742278 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:00.682) 0:00:00.885 ****** 2025-09-19 11:33:10.742285 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742292 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.742300 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742307 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742314 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742321 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:33:10.742327 | orchestrator | 2025-09-19 11:33:10.742335 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-19 11:33:10.742342 | orchestrator | Friday 19 September 2025 11:29:40 +0000 (0:00:00.635) 0:00:01.520 ****** 2025-09-19 11:33:10.742349 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742355 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.742362 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742369 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742376 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742383 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.742390 | orchestrator | 2025-09-19 11:33:10.742397 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-19 11:33:10.742404 | orchestrator | Friday 19 September 2025 11:29:40 +0000 (0:00:00.632) 0:00:02.153 ****** 2025-09-19 11:33:10.742411 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:10.742418 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:10.742425 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:10.742432 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.742439 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.742459 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.742466 | orchestrator | 2025-09-19 11:33:10.742473 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-19 11:33:10.742480 | orchestrator | Friday 19 September 2025 11:29:43 +0000 (0:00:02.833) 0:00:04.987 ****** 2025-09-19 11:33:10.742487 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:10.742494 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:10.742501 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:10.742508 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 11:33:10.742515 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.742521 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.742528 | orchestrator | 2025-09-19 11:33:10.742535 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-19 11:33:10.742542 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:01.041) 0:00:06.028 ****** 2025-09-19 11:33:10.742549 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:10.742556 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:10.742563 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:10.742570 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.742577 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.742584 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.742590 | orchestrator | 2025-09-19 11:33:10.742597 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-19 11:33:10.742604 | orchestrator | Friday 19 September 2025 11:29:46 +0000 (0:00:02.151) 0:00:08.180 ****** 2025-09-19 11:33:10.742611 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742618 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.742625 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742632 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742639 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742646 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.742653 | orchestrator | 2025-09-19 11:33:10.742660 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-19 11:33:10.742667 | orchestrator | Friday 19 September 2025 11:29:47 +0000 (0:00:00.895) 0:00:09.075 ****** 2025-09-19 11:33:10.742674 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742681 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 11:33:10.742688 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742695 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742701 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742708 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.742715 | orchestrator | 2025-09-19 11:33:10.742722 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-19 11:33:10.742729 | orchestrator | Friday 19 September 2025 11:29:48 +0000 (0:00:00.965) 0:00:10.041 ****** 2025-09-19 11:33:10.742736 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742743 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742750 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742757 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742764 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742770 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.742777 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742784 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742791 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742798 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742813 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742821 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742834 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742841 | orchestrator | skipping: [testbed-node-1] 
=> (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742850 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742857 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:33:10.742864 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:33:10.742871 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.742878 | orchestrator | 2025-09-19 11:33:10.742885 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-19 11:33:10.742892 | orchestrator | Friday 19 September 2025 11:29:49 +0000 (0:00:00.538) 0:00:10.580 ****** 2025-09-19 11:33:10.742899 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.742906 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.742935 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.742944 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.742951 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.742958 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.742965 | orchestrator | 2025-09-19 11:33:10.742972 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-19 11:33:10.742979 | orchestrator | Friday 19 September 2025 11:29:50 +0000 (0:00:01.137) 0:00:11.718 ****** 2025-09-19 11:33:10.742986 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:33:10.742993 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:33:10.743000 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:33:10.743007 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743014 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743021 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743028 | orchestrator | 2025-09-19 11:33:10.743035 | orchestrator | TASK [k3s_download : Download k3s binary x64] 
********************************** 2025-09-19 11:33:10.743042 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:00.923) 0:00:12.644 ****** 2025-09-19 11:33:10.743048 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:10.743055 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:10.743062 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:10.743069 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.743076 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.743083 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.743090 | orchestrator | 2025-09-19 11:33:10.743097 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-19 11:33:10.743104 | orchestrator | Friday 19 September 2025 11:29:56 +0000 (0:00:05.590) 0:00:18.235 ****** 2025-09-19 11:33:10.743111 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.743117 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.743124 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.743131 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.743138 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.743145 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.743152 | orchestrator | 2025-09-19 11:33:10.743159 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-19 11:33:10.743165 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:01.584) 0:00:19.819 ****** 2025-09-19 11:33:10.743171 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:33:10.743178 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:33:10.743185 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:33:10.743192 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.743199 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.743206 | orchestrator | skipping: [testbed-node-2] 
2025-09-19 11:33:10.743212 | orchestrator | 2025-09-19 11:33:10.743220 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-19 11:33:10.743227 | orchestrator | Friday 19 September 2025 11:30:00 +0000 (0:00:02.610) 0:00:22.429 ****** 2025-09-19 11:33:10.743239 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:33:10.743246 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:33:10.743253 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:33:10.743260 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743267 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743273 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743280 | orchestrator | 2025-09-19 11:33:10.743287 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-19 11:33:10.743294 | orchestrator | Friday 19 September 2025 11:30:01 +0000 (0:00:00.721) 0:00:23.151 ****** 2025-09-19 11:33:10.743301 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-19 11:33:10.743308 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-19 11:33:10.743315 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-19 11:33:10.743322 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-19 11:33:10.743329 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-19 11:33:10.743336 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-19 11:33:10.743343 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-19 11:33:10.743350 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-19 11:33:10.743357 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-19 11:33:10.743364 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-19 11:33:10.743371 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 
2025-09-19 11:33:10.743377 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-19 11:33:10.743384 | orchestrator | 2025-09-19 11:33:10.743391 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-19 11:33:10.743398 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:01.985) 0:00:25.136 ****** 2025-09-19 11:33:10.743405 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:33:10.743412 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:33:10.743419 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:33:10.743426 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.743432 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.743439 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.743446 | orchestrator | 2025-09-19 11:33:10.743457 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-19 11:33:10.743464 | orchestrator | 2025-09-19 11:33:10.743471 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-19 11:33:10.743478 | orchestrator | Friday 19 September 2025 11:30:05 +0000 (0:00:01.716) 0:00:26.853 ****** 2025-09-19 11:33:10.743488 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743495 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743502 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743509 | orchestrator | 2025-09-19 11:33:10.743516 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-19 11:33:10.743523 | orchestrator | Friday 19 September 2025 11:30:06 +0000 (0:00:01.292) 0:00:28.145 ****** 2025-09-19 11:33:10.743530 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743537 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743544 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743551 | orchestrator | 
2025-09-19 11:33:10.743557 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-19 11:33:10.743564 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:00.993) 0:00:29.139 ****** 2025-09-19 11:33:10.743571 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743578 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743585 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743592 | orchestrator | 2025-09-19 11:33:10.743599 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-19 11:33:10.743606 | orchestrator | Friday 19 September 2025 11:30:08 +0000 (0:00:00.875) 0:00:30.015 ****** 2025-09-19 11:33:10.743613 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743623 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743630 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743637 | orchestrator | 2025-09-19 11:33:10.743644 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-19 11:33:10.743651 | orchestrator | Friday 19 September 2025 11:30:09 +0000 (0:00:01.368) 0:00:31.383 ****** 2025-09-19 11:33:10.743658 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:10.743664 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.743671 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.743678 | orchestrator | 2025-09-19 11:33:10.743685 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-19 11:33:10.743692 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.727) 0:00:32.110 ****** 2025-09-19 11:33:10.743699 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743706 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743713 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743720 | orchestrator | 2025-09-19 11:33:10.743727 | orchestrator | TASK 
[k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-19 11:33:10.743734 | orchestrator | Friday 19 September 2025 11:30:11 +0000 (0:00:01.167) 0:00:33.278 ****** 2025-09-19 11:33:10.743740 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:10.743747 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:10.743754 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.743761 | orchestrator | 2025-09-19 11:33:10.743768 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-19 11:33:10.743775 | orchestrator | Friday 19 September 2025 11:30:13 +0000 (0:00:02.021) 0:00:35.299 ****** 2025-09-19 11:33:10.743782 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:10.743789 | orchestrator | 2025-09-19 11:33:10.743796 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-19 11:33:10.743803 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:00.777) 0:00:36.077 ****** 2025-09-19 11:33:10.743810 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.743816 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.743823 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.743830 | orchestrator | 2025-09-19 11:33:10.743837 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-19 11:33:10.743844 | orchestrator | Friday 19 September 2025 11:30:17 +0000 (0:00:02.971) 0:00:39.048 ****** 2025-09-19 11:33:10.743851 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:10.743858 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:10.743865 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:10.743872 | orchestrator | 2025-09-19 11:33:10.743879 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-19 
11:33:10.743886 | orchestrator | Friday 19 September 2025 11:30:18 +0000 (0:00:01.044) 0:00:40.092 ******
2025-09-19 11:33:10.743892 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.743899 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.743906 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.743924 | orchestrator |
2025-09-19 11:33:10.743931 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-19 11:33:10.743937 | orchestrator | Friday 19 September 2025 11:30:20 +0000 (0:00:01.418) 0:00:41.511 ******
2025-09-19 11:33:10.743944 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.743951 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.743958 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.743965 | orchestrator |
2025-09-19 11:33:10.743972 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-19 11:33:10.743979 | orchestrator | Friday 19 September 2025 11:30:21 +0000 (0:00:01.454) 0:00:42.966 ******
2025-09-19 11:33:10.743986 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.743993 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.743999 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.744011 | orchestrator |
2025-09-19 11:33:10.744018 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-19 11:33:10.744025 | orchestrator | Friday 19 September 2025 11:30:21 +0000 (0:00:00.510) 0:00:43.477 ******
2025-09-19 11:33:10.744032 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.744039 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.744045 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.744052 | orchestrator |
2025-09-19 11:33:10.744059 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-19 11:33:10.744066 | orchestrator | Friday 19 September 2025 11:30:22 +0000 (0:00:00.648) 0:00:44.125 ******
2025-09-19 11:33:10.744073 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744080 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744087 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744094 | orchestrator |
2025-09-19 11:33:10.744104 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-19 11:33:10.744112 | orchestrator | Friday 19 September 2025 11:30:25 +0000 (0:00:02.652) 0:00:46.778 ******
2025-09-19 11:33:10.744121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 11:33:10.744129 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 11:33:10.744136 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 11:33:10.744143 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 11:33:10.744150 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 11:33:10.744157 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 11:33:10.744163 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 11:33:10.744170 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 11:33:10.744176 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 11:33:10.744183 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 11:33:10.744190 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 11:33:10.744197 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 11:33:10.744204 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744211 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744218 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744225 | orchestrator |
2025-09-19 11:33:10.744232 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-19 11:33:10.744239 | orchestrator | Friday 19 September 2025 11:31:10 +0000 (0:00:45.099) 0:01:31.877 ******
2025-09-19 11:33:10.744245 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.744252 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.744259 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.744266 | orchestrator |
2025-09-19 11:33:10.744273 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-19 11:33:10.744280 | orchestrator | Friday 19 September 2025 11:31:10 +0000 (0:00:00.517) 0:01:32.394 ******
2025-09-19 11:33:10.744291 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744298 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744305 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744312 | orchestrator |
2025-09-19 11:33:10.744319 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-19 11:33:10.744326 | orchestrator | Friday 19 September 2025 11:31:12 +0000 (0:00:01.208) 0:01:33.603 ******
2025-09-19 11:33:10.744333 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744340 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744346 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744353 | orchestrator |
2025-09-19 11:33:10.744360 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-19 11:33:10.744367 | orchestrator | Friday 19 September 2025 11:31:13 +0000 (0:00:01.206) 0:01:34.809 ******
2025-09-19 11:33:10.744374 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744381 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744388 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744395 | orchestrator |
2025-09-19 11:33:10.744402 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-19 11:33:10.744409 | orchestrator | Friday 19 September 2025 11:31:36 +0000 (0:00:23.549) 0:01:58.358 ******
2025-09-19 11:33:10.744416 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744423 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744429 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744436 | orchestrator |
2025-09-19 11:33:10.744443 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-19 11:33:10.744450 | orchestrator | Friday 19 September 2025 11:31:37 +0000 (0:00:00.608) 0:01:58.967 ******
2025-09-19 11:33:10.744457 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744464 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744471 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744478 | orchestrator |
2025-09-19 11:33:10.744484 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-19 11:33:10.744492 | orchestrator | Friday 19 September 2025 11:31:38 +0000 (0:00:00.591) 0:01:59.558 ******
2025-09-19 11:33:10.744498 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744505 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744512 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744519 | orchestrator |
2025-09-19 11:33:10.744526 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-19 11:33:10.744533 | orchestrator | Friday 19 September 2025 11:31:38 +0000 (0:00:00.621) 0:02:00.180 ******
2025-09-19 11:33:10.744540 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744550 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744558 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744564 | orchestrator |
2025-09-19 11:33:10.744571 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-19 11:33:10.744578 | orchestrator | Friday 19 September 2025 11:31:39 +0000 (0:00:00.807) 0:02:00.988 ******
2025-09-19 11:33:10.744588 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744595 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744602 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744609 | orchestrator |
2025-09-19 11:33:10.744616 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-19 11:33:10.744623 | orchestrator | Friday 19 September 2025 11:31:39 +0000 (0:00:00.308) 0:02:01.296 ******
2025-09-19 11:33:10.744630 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744637 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744644 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744651 | orchestrator |
2025-09-19 11:33:10.744658 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-19 11:33:10.744665 | orchestrator | Friday 19 September 2025 11:31:40 +0000 (0:00:00.685) 0:02:01.982 ******
2025-09-19 11:33:10.744672 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744683 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744690 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744697 | orchestrator |
2025-09-19 11:33:10.744704 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-19 11:33:10.744711 | orchestrator | Friday 19 September 2025 11:31:41 +0000 (0:00:00.735) 0:02:02.717 ******
2025-09-19 11:33:10.744718 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744725 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744732 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744739 | orchestrator |
2025-09-19 11:33:10.744746 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-19 11:33:10.744753 | orchestrator | Friday 19 September 2025 11:31:42 +0000 (0:00:01.138) 0:02:03.855 ******
2025-09-19 11:33:10.744760 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:10.744767 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:10.744774 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:10.744781 | orchestrator |
2025-09-19 11:33:10.744788 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-19 11:33:10.744795 | orchestrator | Friday 19 September 2025 11:31:43 +0000 (0:00:00.850) 0:02:04.706 ******
2025-09-19 11:33:10.744802 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.744809 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.744816 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.744823 | orchestrator |
2025-09-19 11:33:10.744830 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-19 11:33:10.744836 | orchestrator | Friday 19 September 2025 11:31:43 +0000 (0:00:00.337) 0:02:05.044 ******
2025-09-19 11:33:10.744843 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.744850 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.744857 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.744864 | orchestrator |
2025-09-19 11:33:10.744871 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-19 11:33:10.744878 | orchestrator | Friday 19 September 2025 11:31:43 +0000 (0:00:00.318) 0:02:05.362 ******
2025-09-19 11:33:10.744885 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744892 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744899 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744906 | orchestrator |
2025-09-19 11:33:10.744922 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-19 11:33:10.744930 | orchestrator | Friday 19 September 2025 11:31:44 +0000 (0:00:00.856) 0:02:06.219 ******
2025-09-19 11:33:10.744937 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.744943 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.744950 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.744957 | orchestrator |
2025-09-19 11:33:10.744964 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-19 11:33:10.744971 | orchestrator | Friday 19 September 2025 11:31:45 +0000 (0:00:00.626) 0:02:06.845 ******
2025-09-19 11:33:10.744979 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:33:10.744986 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:33:10.744993 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:33:10.745000 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:33:10.745007 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:33:10.745014 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:33:10.745021 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:33:10.745028 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:33:10.745040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:33:10.745047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-19 11:33:10.745054 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:33:10.745061 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:33:10.745068 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-19 11:33:10.745078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:33:10.745086 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:33:10.745092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:33:10.745102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:33:10.745109 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:33:10.745116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:33:10.745123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:33:10.745130 | orchestrator |
2025-09-19 11:33:10.745137 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-19 11:33:10.745144 | orchestrator |
2025-09-19 11:33:10.745151 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-19 11:33:10.745158 | orchestrator | Friday 19 September 2025 11:31:48 +0000 (0:00:02.943) 0:02:09.788 ******
2025-09-19 11:33:10.745164 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:10.745170 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:10.745177 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:10.745183 | orchestrator |
2025-09-19 11:33:10.745191 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-19 11:33:10.745197 | orchestrator | Friday 19 September 2025 11:31:48 +0000 (0:00:00.575) 0:02:10.364 ******
2025-09-19 11:33:10.745204 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:10.745211 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:10.745218 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:10.745225 | orchestrator |
2025-09-19 11:33:10.745232 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-19 11:33:10.745239 | orchestrator | Friday 19 September 2025 11:31:49 +0000 (0:00:00.734) 0:02:11.099 ******
2025-09-19 11:33:10.745245 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:10.745252 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:10.745259 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:10.745266 | orchestrator |
2025-09-19 11:33:10.745273 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-19 11:33:10.745280 | orchestrator | Friday 19 September 2025 11:31:49 +0000 (0:00:00.359) 0:02:11.459 ******
2025-09-19 11:33:10.745287 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:33:10.745294 | orchestrator |
2025-09-19 11:33:10.745301 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-19 11:33:10.745307 | orchestrator | Friday 19 September 2025 11:31:50 +0000 (0:00:00.676) 0:02:12.135 ******
2025-09-19 11:33:10.745314 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:10.745321 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:10.745328 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:10.745335 | orchestrator |
2025-09-19 11:33:10.745342 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-19 11:33:10.745349 | orchestrator | Friday 19 September 2025 11:31:50 +0000 (0:00:00.343) 0:02:12.479 ******
2025-09-19 11:33:10.745363 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:10.745370 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:10.745377 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:10.745384 | orchestrator |
2025-09-19 11:33:10.745391 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-19 11:33:10.745398 | orchestrator | Friday 19 September 2025 11:31:51 +0000 (0:00:00.378) 0:02:12.858 ******
2025-09-19 11:33:10.745405 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:10.745412 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:10.745419 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:10.745426 | orchestrator |
2025-09-19 11:33:10.745433 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-19 11:33:10.745440 | orchestrator | Friday 19 September 2025 11:31:51 +0000 (0:00:00.343) 0:02:13.201 ******
2025-09-19 11:33:10.745447 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:10.745454 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:10.745460 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:33:10.745467 | orchestrator |
2025-09-19 11:33:10.745474 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-19 11:33:10.745481 | orchestrator | Friday 19 September 2025 11:31:52 +0000 (0:00:00.874) 0:02:14.076 ******
2025-09-19 11:33:10.745488 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:33:10.745495 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:33:10.745502 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:33:10.745509 | orchestrator |
2025-09-19 11:33:10.745516 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-19 11:33:10.745523 | orchestrator | Friday 19 September 2025 11:31:53 +0000 (0:00:01.256) 0:02:15.332 ******
2025-09-19 11:33:10.745529 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:33:10.745536 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:33:10.745543 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:33:10.745550 | orchestrator |
2025-09-19 11:33:10.745557 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-19 11:33:10.745564 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:01.182) 0:02:16.515 ******
2025-09-19 11:33:10.745571 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:33:10.745578 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:33:10.745585 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:33:10.745592 | orchestrator |
2025-09-19 11:33:10.745599 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 11:33:10.745606 | orchestrator |
2025-09-19 11:33:10.745613 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 11:33:10.745620 | orchestrator | Friday 19 September 2025 11:32:07 +0000 (0:00:12.330) 0:02:28.845 ******
2025-09-19 11:33:10.745627 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.745634 | orchestrator |
2025-09-19 11:33:10.745641 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 11:33:10.745648 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:00.774) 0:02:29.620 ******
2025-09-19 11:33:10.745658 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.745665 | orchestrator |
2025-09-19 11:33:10.745672 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 11:33:10.745679 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:00.441) 0:02:30.062 ******
2025-09-19 11:33:10.745688 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 11:33:10.745696 | orchestrator |
2025-09-19 11:33:10.745702 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:33:10.745709 | orchestrator | Friday 19 September 2025 11:32:09 +0000 (0:00:00.577) 0:02:30.639 ******
2025-09-19 11:33:10.745716 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.745723 | orchestrator |
2025-09-19 11:33:10.745730 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 11:33:10.745737 | orchestrator | Friday 19 September 2025 11:32:10 +0000 (0:00:01.250) 0:02:31.890 ******
2025-09-19 11:33:10.745748 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.745756 | orchestrator |
2025-09-19 11:33:10.745763 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 11:33:10.745770 | orchestrator | Friday 19 September 2025 11:32:11 +0000 (0:00:00.730) 0:02:32.620 ******
2025-09-19 11:33:10.745777 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:33:10.745783 | orchestrator |
2025-09-19 11:33:10.745790 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 11:33:10.745797 | orchestrator | Friday 19 September 2025 11:32:12 +0000 (0:00:01.579) 0:02:34.200 ******
2025-09-19 11:33:10.745804 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:33:10.745811 | orchestrator |
2025-09-19 11:33:10.745818 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 11:33:10.745825 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:00.666) 0:02:34.866 ******
2025-09-19 11:33:10.745832 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.745839 | orchestrator |
2025-09-19 11:33:10.745846 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 11:33:10.745853 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:00.326) 0:02:35.192 ******
2025-09-19 11:33:10.745860 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.745867 | orchestrator |
2025-09-19 11:33:10.745874 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-19 11:33:10.745881 | orchestrator |
2025-09-19 11:33:10.745888 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-19 11:33:10.745894 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.771) 0:02:35.963 ******
2025-09-19 11:33:10.745901 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.745908 | orchestrator |
2025-09-19 11:33:10.745937 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-19 11:33:10.745944 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.127) 0:02:36.091 ******
2025-09-19 11:33:10.745951 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:33:10.745958 | orchestrator |
2025-09-19 11:33:10.745965 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-19 11:33:10.745972 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.218) 0:02:36.310 ******
2025-09-19 11:33:10.745979 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.745986 | orchestrator |
2025-09-19 11:33:10.745993 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-19 11:33:10.746000 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.883) 0:02:37.193 ******
2025-09-19 11:33:10.746007 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.746038 | orchestrator |
2025-09-19 11:33:10.746046 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-19 11:33:10.746053 | orchestrator | Friday 19 September 2025 11:32:17 +0000 (0:00:01.959) 0:02:39.153 ******
2025-09-19 11:33:10.746060 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.746067 | orchestrator |
2025-09-19 11:33:10.746074 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-19 11:33:10.746081 | orchestrator | Friday 19 September 2025 11:32:18 +0000 (0:00:01.027) 0:02:40.181 ******
2025-09-19 11:33:10.746088 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.746095 | orchestrator |
2025-09-19 11:33:10.746102 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-19 11:33:10.746109 | orchestrator | Friday 19 September 2025 11:32:19 +0000 (0:00:00.394) 0:02:40.575 ******
2025-09-19 11:33:10.746116 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.746123 | orchestrator |
2025-09-19 11:33:10.746130 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-19 11:33:10.746137 | orchestrator | Friday 19 September 2025 11:32:25 +0000 (0:00:05.925) 0:02:46.500 ******
2025-09-19 11:33:10.746144 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.746156 | orchestrator |
2025-09-19 11:33:10.746162 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-19 11:33:10.746169 | orchestrator | Friday 19 September 2025 11:32:36 +0000 (0:00:11.915) 0:02:58.416 ******
2025-09-19 11:33:10.746175 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.746182 | orchestrator |
2025-09-19 11:33:10.746189 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-19 11:33:10.746196 | orchestrator |
2025-09-19 11:33:10.746202 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-19 11:33:10.746209 | orchestrator | Friday 19 September 2025 11:32:37 +0000 (0:00:00.674) 0:02:59.091 ******
2025-09-19 11:33:10.746216 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.746223 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.746230 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.746237 | orchestrator |
2025-09-19 11:33:10.746244 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-19 11:33:10.746251 | orchestrator | Friday 19 September 2025 11:32:37 +0000 (0:00:00.320) 0:02:59.411 ******
2025-09-19 11:33:10.746258 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746265 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.746272 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.746279 | orchestrator |
2025-09-19 11:33:10.746290 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-19 11:33:10.746297 | orchestrator | Friday 19 September 2025 11:32:38 +0000 (0:00:00.339) 0:02:59.751 ******
2025-09-19 11:33:10.746308 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:10.746315 | orchestrator |
2025-09-19 11:33:10.746322 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-19 11:33:10.746329 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:00.801) 0:03:00.552 ******
2025-09-19 11:33:10.746336 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746343 | orchestrator |
2025-09-19 11:33:10.746350 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-19 11:33:10.746357 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:00.175) 0:03:00.728 ******
2025-09-19 11:33:10.746364 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746371 | orchestrator |
2025-09-19 11:33:10.746378 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-19 11:33:10.746385 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:00.208) 0:03:00.937 ******
2025-09-19 11:33:10.746392 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746399 | orchestrator |
2025-09-19 11:33:10.746406 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-19 11:33:10.746412 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:00.171) 0:03:01.108 ******
2025-09-19 11:33:10.746419 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746426 | orchestrator |
2025-09-19 11:33:10.746433 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-19 11:33:10.746440 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:00.187) 0:03:01.296 ******
2025-09-19 11:33:10.746447 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746454 | orchestrator |
2025-09-19 11:33:10.746461 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-19 11:33:10.746468 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.220) 0:03:01.516 ******
2025-09-19 11:33:10.746475 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746481 | orchestrator |
2025-09-19 11:33:10.746488 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-19 11:33:10.746495 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.196) 0:03:01.713 ******
2025-09-19 11:33:10.746502 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746509 | orchestrator |
2025-09-19 11:33:10.746516 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-19 11:33:10.746527 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.212) 0:03:01.926 ******
2025-09-19 11:33:10.746534 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746541 | orchestrator |
2025-09-19 11:33:10.746548 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-19 11:33:10.746555 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.234) 0:03:02.160 ******
2025-09-19 11:33:10.746562 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746569 | orchestrator |
2025-09-19 11:33:10.746576 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-19 11:33:10.746583 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.218) 0:03:02.378 ******
2025-09-19 11:33:10.746589 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-19 11:33:10.746596 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-19 11:33:10.746603 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746610 | orchestrator |
2025-09-19 11:33:10.746617 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-19 11:33:10.746624 | orchestrator | Friday 19 September 2025 11:32:41 +0000 (0:00:00.816) 0:03:03.195 ******
2025-09-19 11:33:10.746631 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746638 | orchestrator |
2025-09-19 11:33:10.746645 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-19 11:33:10.746652 | orchestrator | Friday 19 September 2025 11:32:41 +0000 (0:00:00.157) 0:03:03.352 ******
2025-09-19 11:33:10.746659 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746666 | orchestrator |
2025-09-19 11:33:10.746673 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-19 11:33:10.746680 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:00.172) 0:03:03.524 ******
2025-09-19 11:33:10.746687 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746694 | orchestrator |
2025-09-19 11:33:10.746701 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-19 11:33:10.746708 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:00.195) 0:03:03.720 ******
2025-09-19 11:33:10.746714 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746721 | orchestrator |
2025-09-19 11:33:10.746728 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-19 11:33:10.746735 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:00.229) 0:03:03.950 ******
2025-09-19 11:33:10.746742 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746749 | orchestrator |
2025-09-19 11:33:10.746756 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-19 11:33:10.746763 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:00.209) 0:03:04.160 ******
2025-09-19 11:33:10.746770 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746777 | orchestrator |
2025-09-19 11:33:10.746784 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-19 11:33:10.746791 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:00.201) 0:03:04.361 ******
2025-09-19 11:33:10.746798 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746805 | orchestrator |
2025-09-19 11:33:10.746812 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-19 11:33:10.746819 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.201) 0:03:04.562 ******
2025-09-19 11:33:10.746826 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746832 | orchestrator |
2025-09-19 11:33:10.746840 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-19 11:33:10.746850 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.221) 0:03:04.784 ******
2025-09-19 11:33:10.746857 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746864 | orchestrator |
2025-09-19 11:33:10.746871 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-19 11:33:10.746880 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.192) 0:03:04.976 ******
2025-09-19 11:33:10.746892 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746899 | orchestrator |
2025-09-19 11:33:10.746906 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-19 11:33:10.746913 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.175) 0:03:05.152 ******
2025-09-19 11:33:10.746944 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.746952 | orchestrator |
2025-09-19 11:33:10.746958 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-19 11:33:10.746965 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.285) 0:03:05.438 ******
2025-09-19 11:33:10.746972 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-19 11:33:10.746979 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-19 11:33:10.746986 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-19 11:33:10.746993 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-19 11:33:10.747000 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747007 | orchestrator |
2025-09-19 11:33:10.747014 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-19 11:33:10.747021 | orchestrator | Friday 19 September 2025 11:32:44 +0000 (0:00:00.710) 0:03:06.148 ******
2025-09-19 11:33:10.747028 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747034 | orchestrator |
2025-09-19 11:33:10.747041 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-19 11:33:10.747048 | orchestrator | Friday 19 September 2025 11:32:44 +0000 (0:00:00.178) 0:03:06.346 ******
2025-09-19 11:33:10.747055 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747062 | orchestrator |
2025-09-19 11:33:10.747069 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-19 11:33:10.747076 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.193) 0:03:06.524 ******
2025-09-19 11:33:10.747083 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747090 | orchestrator |
2025-09-19 11:33:10.747096 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-19 11:33:10.747103 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.199) 0:03:06.718 ******
2025-09-19 11:33:10.747110 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747117 | orchestrator |
2025-09-19 11:33:10.747124 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-19 11:33:10.747131 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.260) 0:03:06.918 ******
2025-09-19 11:33:10.747138 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-19 11:33:10.747145 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-19 11:33:10.747152 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747159 | orchestrator |
2025-09-19 11:33:10.747165 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-19 11:33:10.747171 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.366) 0:03:07.179 ******
2025-09-19 11:33:10.747178 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747185 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.747192 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.747198 | orchestrator |
2025-09-19 11:33:10.747205 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-19 11:33:10.747212 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.366) 0:03:07.546 ******
2025-09-19 11:33:10.747219 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:10.747226 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:10.747233 |
orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:10.747240 | orchestrator |
2025-09-19 11:33:10.747247 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-19 11:33:10.747254 | orchestrator |
2025-09-19 11:33:10.747261 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-19 11:33:10.747273 | orchestrator | Friday 19 September 2025 11:32:47 +0000 (0:00:01.232) 0:03:08.778 ******
2025-09-19 11:33:10.747280 | orchestrator | ok: [testbed-manager]
2025-09-19 11:33:10.747287 | orchestrator |
2025-09-19 11:33:10.747294 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-19 11:33:10.747301 | orchestrator | Friday 19 September 2025 11:32:47 +0000 (0:00:00.146) 0:03:08.924 ******
2025-09-19 11:33:10.747307 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:33:10.747314 | orchestrator |
2025-09-19 11:33:10.747321 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-19 11:33:10.747328 | orchestrator | Friday 19 September 2025 11:32:47 +0000 (0:00:00.248) 0:03:09.172 ******
2025-09-19 11:33:10.747335 | orchestrator | changed: [testbed-manager]
2025-09-19 11:33:10.747342 | orchestrator |
2025-09-19 11:33:10.747349 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-19 11:33:10.747356 | orchestrator |
2025-09-19 11:33:10.747363 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-19 11:33:10.747369 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:05.665) 0:03:14.838 ******
2025-09-19 11:33:10.747376 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:33:10.747383 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:33:10.747390 | orchestrator | ok: [testbed-node-5]
2025-09-19
11:33:10.747397 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:10.747404 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:10.747411 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:10.747417 | orchestrator | 2025-09-19 11:33:10.747424 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-19 11:33:10.747432 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:01.151) 0:03:15.989 ****** 2025-09-19 11:33:10.747442 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:33:10.747449 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:33:10.747459 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:33:10.747466 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:33:10.747473 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:33:10.747480 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:33:10.747487 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:33:10.747494 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:33:10.747501 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:33:10.747508 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:33:10.747514 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 11:33:10.747521 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 
11:33:10.747528 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:33:10.747535 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 11:33:10.747542 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:33:10.747549 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 11:33:10.747556 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 11:33:10.747563 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 11:33:10.747570 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:33:10.747581 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:33:10.747588 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:33:10.747595 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:33:10.747601 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:33:10.747608 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:33:10.747615 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:33:10.747622 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:33:10.747629 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 11:33:10.747636 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:33:10.747643 | 
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 11:33:10.747650 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 11:33:10.747657 | orchestrator |
2025-09-19 11:33:10.747663 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-19 11:33:10.747670 | orchestrator | Friday 19 September 2025 11:33:06 +0000 (0:00:12.491) 0:03:28.480 ******
2025-09-19 11:33:10.747677 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:10.747684 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:10.747691 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:10.747698 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747705 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.747712 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.747718 | orchestrator |
2025-09-19 11:33:10.747725 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-19 11:33:10.747732 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:00.645) 0:03:29.126 ******
2025-09-19 11:33:10.747739 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:33:10.747746 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:33:10.747753 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:33:10.747760 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:10.747767 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:10.747774 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:10.747781 | orchestrator |
2025-09-19 11:33:10.747788 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:33:10.747795 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:33:10.747802 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-19 11:33:10.747810 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 11:33:10.747820 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 11:33:10.747830 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 11:33:10.747837 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 11:33:10.747844 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 11:33:10.747855 | orchestrator |
2025-09-19 11:33:10.747862 | orchestrator |
2025-09-19 11:33:10.747869 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:33:10.747876 | orchestrator | Friday 19 September 2025 11:33:08 +0000 (0:00:00.442) 0:03:29.569 ******
2025-09-19 11:33:10.747883 | orchestrator | ===============================================================================
2025-09-19 11:33:10.747890 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.10s
2025-09-19 11:33:10.747897 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.55s
2025-09-19 11:33:10.747904 | orchestrator | Manage labels ---------------------------------------------------------- 12.49s
2025-09-19 11:33:10.747911 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.33s
2025-09-19 11:33:10.747926 | orchestrator | kubectl : Install required packages ------------------------------------ 11.92s
2025-09-19 11:33:10.747933 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.93s
2025-09-19 11:33:10.747940 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.67s
2025-09-19 11:33:10.747947 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.59s
2025-09-19 11:33:10.747954 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.97s
2025-09-19 11:33:10.747961 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.94s
2025-09-19 11:33:10.747968 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.83s
2025-09-19 11:33:10.747975 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.65s
2025-09-19 11:33:10.747982 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.61s
2025-09-19 11:33:10.747988 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.15s
2025-09-19 11:33:10.747995 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.02s
2025-09-19 11:33:10.748002 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.99s
2025-09-19 11:33:10.748009 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.96s
2025-09-19 11:33:10.748016 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.72s
2025-09-19 11:33:10.748023 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.58s
2025-09-19 11:33:10.748030 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s
2025-09-19 11:33:10.748037 | orchestrator | 2025-09-19 11:33:10 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED
2025-09-19 11:33:10.748044 | orchestrator | 2025-09-19 11:33:10 | INFO  |
Wait 1 second(s) until the next check 2025-09-19 11:33:13.771290 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:13.772049 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task 8bc9f70c-6b1a-44e7-8e9c-920a3da8946f is in state STARTED 2025-09-19 11:33:13.773968 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:13.775247 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task 84ab3860-c70a-4615-a092-3085bd9430e1 is in state STARTED 2025-09-19 11:33:13.776139 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:13.778220 | orchestrator | 2025-09-19 11:33:13 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:13.778282 | orchestrator | 2025-09-19 11:33:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:16.860828 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:16.861124 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task 8bc9f70c-6b1a-44e7-8e9c-920a3da8946f is in state STARTED 2025-09-19 11:33:16.864787 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:16.865144 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task 84ab3860-c70a-4615-a092-3085bd9430e1 is in state SUCCESS 2025-09-19 11:33:16.868822 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:16.869467 | orchestrator | 2025-09-19 11:33:16 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:16.869516 | orchestrator | 2025-09-19 11:33:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:19.893844 | orchestrator | 2025-09-19 11:33:19 | INFO  | Task 
d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:19.894123 | orchestrator | 2025-09-19 11:33:19 | INFO  | Task 8bc9f70c-6b1a-44e7-8e9c-920a3da8946f is in state SUCCESS 2025-09-19 11:33:19.894560 | orchestrator | 2025-09-19 11:33:19 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:19.895458 | orchestrator | 2025-09-19 11:33:19 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:19.895823 | orchestrator | 2025-09-19 11:33:19 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:19.895853 | orchestrator | 2025-09-19 11:33:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:22.951218 | orchestrator | 2025-09-19 11:33:22 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:22.951309 | orchestrator | 2025-09-19 11:33:22 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:22.951325 | orchestrator | 2025-09-19 11:33:22 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:22.951338 | orchestrator | 2025-09-19 11:33:22 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:22.951349 | orchestrator | 2025-09-19 11:33:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:26.018380 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:26.019094 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:26.020493 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:26.022528 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:26.022561 | orchestrator | 2025-09-19 11:33:26 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 11:33:29.079683 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:29.080732 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:29.083079 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:29.085006 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:29.085032 | orchestrator | 2025-09-19 11:33:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:32.133961 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:32.137195 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:32.138645 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:32.141806 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:32.141873 | orchestrator | 2025-09-19 11:33:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:35.250539 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:35.251969 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:35.253415 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:35.256256 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:35.256308 | orchestrator | 2025-09-19 11:33:35 | INFO  | Wait 1 second(s) until the next check 
2025-09-19 11:33:38.311025 | orchestrator | 2025-09-19 11:33:38 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:38.312081 | orchestrator | 2025-09-19 11:33:38 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:38.313880 | orchestrator | 2025-09-19 11:33:38 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:38.315671 | orchestrator | 2025-09-19 11:33:38 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:38.315994 | orchestrator | 2025-09-19 11:33:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:41.361841 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:41.364530 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:41.367886 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:41.371561 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:41.371589 | orchestrator | 2025-09-19 11:33:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:44.410471 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:44.410568 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:44.412333 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:44.415593 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:44.415626 | orchestrator | 2025-09-19 11:33:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:47.446168 | 
orchestrator | 2025-09-19 11:33:47 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:47.448464 | orchestrator | 2025-09-19 11:33:47 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:47.449563 | orchestrator | 2025-09-19 11:33:47 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:47.450894 | orchestrator | 2025-09-19 11:33:47 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:47.451090 | orchestrator | 2025-09-19 11:33:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:50.488942 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:50.490265 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:50.491653 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:50.492772 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:50.493434 | orchestrator | 2025-09-19 11:33:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:53.537237 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:53.538013 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:53.539410 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:53.540963 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:53.541397 | orchestrator | 2025-09-19 11:33:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:56.579997 | orchestrator | 2025-09-19 
11:33:56 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:56.580082 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:56.580992 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:56.581300 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:56.581319 | orchestrator | 2025-09-19 11:33:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:59.616663 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:33:59.619220 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:33:59.621445 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:33:59.623247 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:33:59.623283 | orchestrator | 2025-09-19 11:33:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:02.669451 | orchestrator | 2025-09-19 11:34:02 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:02.676775 | orchestrator | 2025-09-19 11:34:02 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:02.676873 | orchestrator | 2025-09-19 11:34:02 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:02.676889 | orchestrator | 2025-09-19 11:34:02 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:02.676901 | orchestrator | 2025-09-19 11:34:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:05.705627 | orchestrator | 2025-09-19 11:34:05 | INFO  | Task 
d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:05.707653 | orchestrator | 2025-09-19 11:34:05 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:05.709404 | orchestrator | 2025-09-19 11:34:05 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:05.711080 | orchestrator | 2025-09-19 11:34:05 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:05.711114 | orchestrator | 2025-09-19 11:34:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:08.754780 | orchestrator | 2025-09-19 11:34:08 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:08.757437 | orchestrator | 2025-09-19 11:34:08 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:08.759463 | orchestrator | 2025-09-19 11:34:08 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:08.761645 | orchestrator | 2025-09-19 11:34:08 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:08.761669 | orchestrator | 2025-09-19 11:34:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:11.822442 | orchestrator | 2025-09-19 11:34:11 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:11.824529 | orchestrator | 2025-09-19 11:34:11 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:11.826696 | orchestrator | 2025-09-19 11:34:11 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:11.828218 | orchestrator | 2025-09-19 11:34:11 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:11.828242 | orchestrator | 2025-09-19 11:34:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:14.866577 | orchestrator | 2025-09-19 11:34:14 | INFO  | Task 
d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:14.867041 | orchestrator | 2025-09-19 11:34:14 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:14.869889 | orchestrator | 2025-09-19 11:34:14 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:14.871043 | orchestrator | 2025-09-19 11:34:14 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:14.871069 | orchestrator | 2025-09-19 11:34:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:17.913404 | orchestrator | 2025-09-19 11:34:17 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:17.914812 | orchestrator | 2025-09-19 11:34:17 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:17.916784 | orchestrator | 2025-09-19 11:34:17 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:17.919048 | orchestrator | 2025-09-19 11:34:17 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:17.919075 | orchestrator | 2025-09-19 11:34:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:20.948982 | orchestrator | 2025-09-19 11:34:20 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:20.955352 | orchestrator | 2025-09-19 11:34:20 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:20.959030 | orchestrator | 2025-09-19 11:34:20 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:20.961439 | orchestrator | 2025-09-19 11:34:20 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:20.961567 | orchestrator | 2025-09-19 11:34:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:24.015938 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 
d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:24.016069 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:24.018608 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:24.021602 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:24.021638 | orchestrator | 2025-09-19 11:34:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:27.057318 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:27.057407 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:27.059040 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:27.060544 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:27.060922 | orchestrator | 2025-09-19 11:34:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:30.109840 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:30.111659 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:30.111981 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED 2025-09-19 11:34:30.114285 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:30.114308 | orchestrator | 2025-09-19 11:34:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:33.158332 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 
d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:34:33.159473 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:34:33.161666 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:34:33.163321 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED
2025-09-19 11:34:33.163644 | orchestrator | 2025-09-19 11:34:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:36.197569 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:34:36.198582 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:34:36.199920 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state STARTED
2025-09-19 11:34:36.202422 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED
2025-09-19 11:34:36.202985 | orchestrator | 2025-09-19 11:34:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:39.272024 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:34:39.273865 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED
2025-09-19 11:34:39.274828 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 5d166331-f3dc-49ba-9b84-f5e8141f26fe is in state SUCCESS
2025-09-19 11:34:39.276899 | orchestrator |
2025-09-19 11:34:39.276936 | orchestrator |
2025-09-19 11:34:39.276945 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-19 11:34:39.276952 | orchestrator |
2025-09-19 11:34:39.276976 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 11:34:39.276983 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.377) 0:00:00.377 ******
2025-09-19 11:34:39.276991 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 11:34:39.276998 | orchestrator |
2025-09-19 11:34:39.277005 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:34:39.277012 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.890) 0:00:01.267 ******
2025-09-19 11:34:39.277019 | orchestrator | changed: [testbed-manager]
2025-09-19 11:34:39.277026 | orchestrator |
2025-09-19 11:34:39.277034 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-19 11:34:39.277041 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:01.544) 0:00:02.811 ******
2025-09-19 11:34:39.277048 | orchestrator | changed: [testbed-manager]
2025-09-19 11:34:39.277055 | orchestrator |
2025-09-19 11:34:39.277063 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:34:39.277069 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:34:39.277078 | orchestrator |
2025-09-19 11:34:39.277086 | orchestrator |
2025-09-19 11:34:39.277093 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:34:39.277100 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:00.436) 0:00:03.248 ******
2025-09-19 11:34:39.277107 | orchestrator | ===============================================================================
2025-09-19 11:34:39.277113 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.54s
2025-09-19 11:34:39.277120 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s
2025-09-19 11:34:39.277126 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.44s
2025-09-19 11:34:39.277132 | orchestrator |
2025-09-19 11:34:39.277138 | orchestrator |
2025-09-19 11:34:39.277144 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 11:34:39.277150 | orchestrator |
2025-09-19 11:34:39.277157 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 11:34:39.277163 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.259) 0:00:00.259 ******
2025-09-19 11:34:39.277169 | orchestrator | ok: [testbed-manager]
2025-09-19 11:34:39.277177 | orchestrator |
2025-09-19 11:34:39.277183 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 11:34:39.277190 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.550) 0:00:00.810 ******
2025-09-19 11:34:39.277196 | orchestrator | ok: [testbed-manager]
2025-09-19 11:34:39.277202 | orchestrator |
2025-09-19 11:34:39.277209 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 11:34:39.277216 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.527) 0:00:01.338 ******
2025-09-19 11:34:39.277222 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 11:34:39.277228 | orchestrator |
2025-09-19 11:34:39.277234 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:34:39.277241 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.654) 0:00:01.992 ******
2025-09-19 11:34:39.277248 | orchestrator | changed: [testbed-manager]
2025-09-19 11:34:39.277254 | orchestrator |
2025-09-19 11:34:39.277261 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 11:34:39.277269 | orchestrator | Friday 19
September 2025 11:33:15 +0000 (0:00:01.188) 0:00:03.181 ******
2025-09-19 11:34:39.277276 | orchestrator | changed: [testbed-manager]
2025-09-19 11:34:39.277282 | orchestrator |
2025-09-19 11:34:39.277289 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 11:34:39.277305 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:00.769) 0:00:03.951 ******
2025-09-19 11:34:39.277312 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:34:39.277327 | orchestrator |
2025-09-19 11:34:39.277334 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 11:34:39.277341 | orchestrator | Friday 19 September 2025 11:33:17 +0000 (0:00:01.395) 0:00:05.347 ******
2025-09-19 11:34:39.277347 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:34:39.277354 | orchestrator |
2025-09-19 11:34:39.277361 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 11:34:39.277368 | orchestrator | Friday 19 September 2025 11:33:17 +0000 (0:00:00.760) 0:00:06.107 ******
2025-09-19 11:34:39.277374 | orchestrator | ok: [testbed-manager]
2025-09-19 11:34:39.277381 | orchestrator |
2025-09-19 11:34:39.277388 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 11:34:39.277394 | orchestrator | Friday 19 September 2025 11:33:18 +0000 (0:00:00.357) 0:00:06.464 ******
2025-09-19 11:34:39.277401 | orchestrator | ok: [testbed-manager]
2025-09-19 11:34:39.277408 | orchestrator |
2025-09-19 11:34:39.277414 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:34:39.277421 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:34:39.277428 | orchestrator |
2025-09-19 11:34:39.277435 | orchestrator |
2025-09-19 11:34:39.277443 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:34:39.277450 | orchestrator | Friday 19 September 2025 11:33:18 +0000 (0:00:00.292) 0:00:06.757 ******
2025-09-19 11:34:39.277456 | orchestrator | ===============================================================================
2025-09-19 11:34:39.277463 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.40s
2025-09-19 11:34:39.277469 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s
2025-09-19 11:34:39.277476 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.77s
2025-09-19 11:34:39.277494 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.76s
2025-09-19 11:34:39.277501 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s
2025-09-19 11:34:39.277507 | orchestrator | Get home directory of operator user ------------------------------------- 0.55s
2025-09-19 11:34:39.277513 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s
2025-09-19 11:34:39.277519 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s
2025-09-19 11:34:39.277526 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s
2025-09-19 11:34:39.277532 | orchestrator |
2025-09-19 11:34:39.277538 | orchestrator |
2025-09-19 11:34:39.277583 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-19 11:34:39.277593 | orchestrator |
2025-09-19 11:34:39.277600 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 11:34:39.277607 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.086) 0:00:00.086 ******
2025-09-19 11:34:39.277613 | orchestrator | ok:
[localhost] => {
2025-09-19 11:34:39.277620 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 11:34:39.277627 | orchestrator | }
2025-09-19 11:34:39.277634 | orchestrator |
2025-09-19 11:34:39.277641 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 11:34:39.277647 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.045) 0:00:00.132 ******
2025-09-19 11:34:39.277655 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 11:34:39.277664 | orchestrator | ...ignoring
2025-09-19 11:34:39.277671 | orchestrator |
2025-09-19 11:34:39.277678 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 11:34:39.277685 | orchestrator | Friday 19 September 2025 11:32:19 +0000 (0:00:03.889) 0:00:04.021 ******
2025-09-19 11:34:39.277692 | orchestrator | skipping: [localhost]
2025-09-19 11:34:39.277707 | orchestrator |
2025-09-19 11:34:39.277714 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 11:34:39.277721 | orchestrator | Friday 19 September 2025 11:32:19 +0000 (0:00:00.041) 0:00:04.063 ******
2025-09-19 11:34:39.277728 | orchestrator | ok: [localhost]
2025-09-19 11:34:39.277735 | orchestrator |
2025-09-19 11:34:39.277742 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:34:39.277749 | orchestrator |
2025-09-19 11:34:39.277772 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:34:39.277779 | orchestrator | Friday 19 September 2025 11:32:19 +0000 (0:00:00.124) 0:00:04.188 ******
2025-09-19 11:34:39.277812 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:34:39.277821 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:34:39.277827 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:34:39.277832 | orchestrator |
2025-09-19 11:34:39.277836 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:34:39.277840 | orchestrator | Friday 19 September 2025 11:32:20 +0000 (0:00:00.289) 0:00:04.477 ******
2025-09-19 11:34:39.277844 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 11:34:39.277849 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 11:34:39.277853 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 11:34:39.277857 | orchestrator |
2025-09-19 11:34:39.277861 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 11:34:39.277865 | orchestrator |
2025-09-19 11:34:39.277869 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 11:34:39.277873 | orchestrator | Friday 19 September 2025 11:32:20 +0000 (0:00:00.422) 0:00:04.900 ******
2025-09-19 11:34:39.277883 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:34:39.277888 | orchestrator |
2025-09-19 11:34:39.277892 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 11:34:39.277896 | orchestrator | Friday 19 September 2025 11:32:21 +0000 (0:00:00.565) 0:00:05.466 ******
2025-09-19 11:34:39.277900 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:34:39.277904 | orchestrator |
2025-09-19 11:34:39.277908 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 11:34:39.277912 | orchestrator | Friday 19 September 2025 11:32:22 +0000 (0:00:01.118) 0:00:06.584 ******
2025-09-19 11:34:39.277916 | orchestrator | skipping:
[testbed-node-0]
2025-09-19 11:34:39.277921 | orchestrator |
2025-09-19 11:34:39.277925 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 11:34:39.277929 | orchestrator | Friday 19 September 2025 11:32:22 +0000 (0:00:00.373) 0:00:06.958 ******
2025-09-19 11:34:39.277933 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.277937 | orchestrator |
2025-09-19 11:34:39.277941 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 11:34:39.277945 | orchestrator | Friday 19 September 2025 11:32:22 +0000 (0:00:00.361) 0:00:07.319 ******
2025-09-19 11:34:39.277949 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.277953 | orchestrator |
2025-09-19 11:34:39.277957 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 11:34:39.277961 | orchestrator | Friday 19 September 2025 11:32:23 +0000 (0:00:00.370) 0:00:07.689 ******
2025-09-19 11:34:39.277965 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.277969 | orchestrator |
2025-09-19 11:34:39.277973 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 11:34:39.277977 | orchestrator | Friday 19 September 2025 11:32:23 +0000 (0:00:00.369) 0:00:08.058 ******
2025-09-19 11:34:39.277981 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:34:39.277985 | orchestrator |
2025-09-19 11:34:39.277989 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 11:34:39.278006 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:00.929) 0:00:08.988 ******
2025-09-19 11:34:39.278011 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:34:39.278062 | orchestrator |
2025-09-19 11:34:39.278069 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-19 11:34:39.278073 | orchestrator | Friday 19 September 2025 11:32:25 +0000 (0:00:01.152) 0:00:10.141 ******
2025-09-19 11:34:39.278077 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.278081 | orchestrator |
2025-09-19 11:34:39.278085 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-19 11:34:39.278090 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:00.592) 0:00:10.733 ******
2025-09-19 11:34:39.278094 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.278098 | orchestrator |
2025-09-19 11:34:39.278102 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-19 11:34:39.278106 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:00.472) 0:00:11.206 ******
2025-09-19 11:34:39.278114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278142 | orchestrator |
2025-09-19 11:34:39.278147 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-19 11:34:39.278151 | orchestrator | Friday 19 September 2025 11:32:28 +0000 (0:00:01.365) 0:00:12.571 ******
2025-09-19 11:34:39.278161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278177 | orchestrator |
2025-09-19 11:34:39.278182 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-09-19 11:34:39.278186 | orchestrator | Friday 19 September 2025 11:32:31 +0000 (0:00:02.827)
0:00:15.399 ******
2025-09-19 11:34:39.278190 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 11:34:39.278197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 11:34:39.278201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 11:34:39.278205 | orchestrator |
2025-09-19 11:34:39.278209 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-09-19 11:34:39.278213 | orchestrator | Friday 19 September 2025 11:32:32 +0000 (0:00:01.570) 0:00:16.969 ******
2025-09-19 11:34:39.278217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 11:34:39.278222 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 11:34:39.278226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-09-19 11:34:39.278230 | orchestrator |
2025-09-19 11:34:39.278234 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-09-19 11:34:39.278241 | orchestrator | Friday 19 September 2025 11:32:35 +0000 (0:00:02.745) 0:00:19.715 ******
2025-09-19 11:34:39.278245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 11:34:39.278249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 11:34:39.278253 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-09-19 11:34:39.278258 | orchestrator |
2025-09-19 11:34:39.278262 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-09-19 11:34:39.278266 | orchestrator | Friday 19 September 2025 11:32:36 +0000 (0:00:01.641) 0:00:21.356 ******
2025-09-19 11:34:39.278270 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 11:34:39.278274 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 11:34:39.278278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-09-19 11:34:39.278282 | orchestrator |
2025-09-19 11:34:39.278286 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-09-19 11:34:39.278290 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:02.562) 0:00:23.919 ******
2025-09-19 11:34:39.278294 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 11:34:39.278298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 11:34:39.278302 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-09-19 11:34:39.278306 | orchestrator |
2025-09-19 11:34:39.278310 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-09-19 11:34:39.278313 | orchestrator | Friday 19 September 2025 11:32:41 +0000 (0:00:01.669) 0:00:25.588 ******
2025-09-19 11:34:39.278317 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 11:34:39.278321 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 11:34:39.278325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-09-19 11:34:39.278329 | orchestrator |
2025-09-19 11:34:39.278334 | orchestrator | TASK [rabbitmq : include_tasks]
************************************************
2025-09-19 11:34:39.278340 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:01.724) 0:00:27.313 ******
2025-09-19 11:34:39.278346 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:34:39.278352 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:34:39.278359 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:34:39.278366 | orchestrator |
2025-09-19 11:34:39.278370 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-09-19 11:34:39.278374 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.912) 0:00:28.225 ******
2025-09-19 11:34:39.278383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:34:39.278400 | orchestrator |
2025-09-19 11:34:39.278404 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-09-19 11:34:39.278408 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.989) 0:00:29.796 ******
2025-09-19 11:34:39.278412 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:34:39.278416 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:34:39.278420 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:34:39.278424 | orchestrator |
2025-09-19 11:34:39.278427 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-09-19 11:34:39.278431 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.989) 0:00:30.786 ******
2025-09-19 11:34:39.278435 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:34:39.278439 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:34:39.278446 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:34:39.278450 | orchestrator |
2025-09-19 11:34:39.278455 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-19 11:34:39.278462 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:08.032) 0:00:38.819 ******
2025-09-19 11:34:39.278469 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:34:39.278476 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:34:39.278483 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:34:39.278490 | orchestrator |
2025-09-19 11:34:39.278497 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 11:34:39.278504 | orchestrator |
2025-09-19 11:34:39.278510 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 11:34:39.278516 | orchestrator | Friday 19 September 2025 11:32:56 +0000 (0:00:01.991) 0:00:40.810 ******
2025-09-19 11:34:39.278522 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:34:39.278529 | orchestrator |
2025-09-19 11:34:39.278538 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:34:39.278548 | orchestrator | Friday 19 September 2025 11:32:57 +0000 (0:00:01.194) 0:00:42.005 ****** 2025-09-19 11:34:39.278557 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:34:39.278563 | orchestrator | 2025-09-19 11:34:39.278571 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 11:34:39.278583 | orchestrator | Friday 19 September 2025 11:32:58 +0000 (0:00:00.711) 0:00:42.716 ****** 2025-09-19 11:34:39.278591 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:34:39.278600 | orchestrator | 2025-09-19 11:34:39.278606 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 11:34:39.278613 | orchestrator | Friday 19 September 2025 11:33:00 +0000 (0:00:02.238) 0:00:44.955 ****** 2025-09-19 11:34:39.278619 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:34:39.278626 | orchestrator | 2025-09-19 11:34:39.278632 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 11:34:39.278638 | orchestrator | 2025-09-19 11:34:39.278645 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 11:34:39.278652 | orchestrator | Friday 19 September 2025 11:33:56 +0000 (0:00:55.759) 0:01:40.715 ****** 2025-09-19 11:34:39.278658 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:34:39.278664 | orchestrator | 2025-09-19 11:34:39.278671 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:34:39.278677 | orchestrator | Friday 19 September 2025 11:33:56 +0000 (0:00:00.599) 0:01:41.314 ****** 2025-09-19 11:34:39.278684 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:34:39.278690 | orchestrator | 2025-09-19 11:34:39.278697 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 11:34:39.278704 | orchestrator | Friday 19 September 2025 11:33:57 +0000 (0:00:00.245) 0:01:41.560 ****** 2025-09-19 11:34:39.278711 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:34:39.278718 | orchestrator | 2025-09-19 11:34:39.278725 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 11:34:39.278732 | orchestrator | Friday 19 September 2025 11:33:59 +0000 (0:00:01.956) 0:01:43.516 ****** 2025-09-19 11:34:39.278739 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:34:39.278747 | orchestrator | 2025-09-19 11:34:39.278753 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 11:34:39.278781 | orchestrator | 2025-09-19 11:34:39.278787 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 11:34:39.278791 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:16.881) 0:02:00.397 ****** 2025-09-19 11:34:39.278795 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:34:39.278799 | orchestrator | 2025-09-19 11:34:39.278806 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:34:39.278810 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:00.608) 0:02:01.006 ****** 2025-09-19 11:34:39.278814 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:34:39.278825 | orchestrator | 2025-09-19 11:34:39.278829 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 11:34:39.278833 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:00.236) 0:02:01.242 ****** 2025-09-19 11:34:39.278837 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:34:39.278841 | orchestrator | 2025-09-19 11:34:39.278845 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2025-09-19 11:34:39.278848 | orchestrator | Friday 19 September 2025 11:34:18 +0000 (0:00:01.585) 0:02:02.828 ****** 2025-09-19 11:34:39.278852 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:34:39.278856 | orchestrator | 2025-09-19 11:34:39.278860 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-19 11:34:39.278864 | orchestrator | 2025-09-19 11:34:39.278868 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-19 11:34:39.278872 | orchestrator | Friday 19 September 2025 11:34:34 +0000 (0:00:15.834) 0:02:18.663 ****** 2025-09-19 11:34:39.278877 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:34:39.278881 | orchestrator | 2025-09-19 11:34:39.278885 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-19 11:34:39.278888 | orchestrator | Friday 19 September 2025 11:34:34 +0000 (0:00:00.692) 0:02:19.357 ****** 2025-09-19 11:34:39.278892 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 11:34:39.278896 | orchestrator | enable_outward_rabbitmq_True 2025-09-19 11:34:39.278900 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 11:34:39.278904 | orchestrator | outward_rabbitmq_restart 2025-09-19 11:34:39.278908 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:34:39.278912 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:34:39.278916 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:34:39.278920 | orchestrator | 2025-09-19 11:34:39.278924 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-19 11:34:39.278928 | orchestrator | skipping: no hosts matched 2025-09-19 11:34:39.278932 | orchestrator | 2025-09-19 11:34:39.278935 | orchestrator | PLAY [Restart rabbitmq (outward) services] 
************************************* 2025-09-19 11:34:39.278939 | orchestrator | skipping: no hosts matched 2025-09-19 11:34:39.278943 | orchestrator | 2025-09-19 11:34:39.278947 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-19 11:34:39.278951 | orchestrator | skipping: no hosts matched 2025-09-19 11:34:39.278955 | orchestrator | 2025-09-19 11:34:39.278959 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:34:39.278963 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 11:34:39.278968 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:34:39.278972 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:34:39.278976 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:34:39.278980 | orchestrator | 2025-09-19 11:34:39.278984 | orchestrator | 2025-09-19 11:34:39.278988 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:34:39.278992 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:02.363) 0:02:21.720 ****** 2025-09-19 11:34:39.278999 | orchestrator | =============================================================================== 2025-09-19 11:34:39.279003 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 88.48s 2025-09-19 11:34:39.279007 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.03s 2025-09-19 11:34:39.279011 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.78s 2025-09-19 11:34:39.279018 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.89s 
2025-09-19 11:34:39.279021 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.83s 2025-09-19 11:34:39.279025 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.75s 2025-09-19 11:34:39.279029 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.56s 2025-09-19 11:34:39.279033 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.40s 2025-09-19 11:34:39.279037 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.36s 2025-09-19 11:34:39.279041 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.99s 2025-09-19 11:34:39.279044 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.72s 2025-09-19 11:34:39.279048 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.67s 2025-09-19 11:34:39.279052 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.64s 2025-09-19 11:34:39.279056 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.57s 2025-09-19 11:34:39.279060 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.57s 2025-09-19 11:34:39.279064 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.37s 2025-09-19 11:34:39.279068 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.19s 2025-09-19 11:34:39.279074 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2025-09-19 11:34:39.279078 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.12s 2025-09-19 11:34:39.279082 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-09-19 
11:34:39.279086 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:39.279091 | orchestrator | 2025-09-19 11:34:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:34:42.329528 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:34:42.336127 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:34:42.339705 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state STARTED 2025-09-19 11:34:42.341465 | orchestrator | 2025-09-19 11:34:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:35:31.069140 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:35:31.070450 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:35:31.075011 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task 23416133-a0cd-4a40-a881-402806c06668 is in state SUCCESS 2025-09-19 11:35:31.077171 | orchestrator | 2025-09-19 11:35:31.077288 | orchestrator | 2025-09-19 11:35:31.077302 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:35:31.077314 | orchestrator | 2025-09-19 11:35:31.077326 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:35:31.077338 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:00.188) 0:00:00.188 ****** 2025-09-19 11:35:31.077349 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.077362 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.077442 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.077485 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:31.077582 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:31.077597 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:31.077608 | orchestrator | 2025-09-19 11:35:31.077619 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-09-19 11:35:31.077656 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:00.775) 0:00:00.964 ****** 2025-09-19 11:35:31.077694 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-19 11:35:31.077708 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-19 11:35:31.077720 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-19 11:35:31.077732 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-19 11:35:31.077744 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-19 11:35:31.077756 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-19 11:35:31.077769 | orchestrator | 2025-09-19 11:35:31.077781 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-19 11:35:31.077793 | orchestrator | 2025-09-19 11:35:31.077805 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-19 11:35:31.077818 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:02.004) 0:00:02.968 ****** 2025-09-19 11:35:31.077832 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:31.077845 | orchestrator | 2025-09-19 11:35:31.077857 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-19 11:35:31.077869 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:01.884) 0:00:04.853 ****** 2025-09-19 11:35:31.077882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.077926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.077955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.077966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.077977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078000 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078100 | orchestrator | 2025-09-19 11:35:31.078129 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-19 11:35:31.078141 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:01.236) 0:00:06.089 ****** 2025-09-19 11:35:31.078152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078219 | orchestrator | 2025-09-19 11:35:31.078231 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-19 11:35:31.078242 | orchestrator | Friday 19 September 2025 11:33:16 +0000 (0:00:01.852) 0:00:07.942 ****** 2025-09-19 11:35:31.078341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078433 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078444 | orchestrator | 2025-09-19 11:35:31.078455 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-19 11:35:31.078466 | orchestrator | Friday 19 September 2025 11:33:18 +0000 (0:00:01.151) 0:00:09.094 ****** 2025-09-19 11:35:31.078477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078596 | orchestrator | 2025-09-19 11:35:31.078616 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-19 11:35:31.078627 | orchestrator | Friday 19 September 2025 11:33:19 +0000 (0:00:01.616) 0:00:10.711 ****** 2025-09-19 11:35:31.078638 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.078742 | orchestrator | 2025-09-19 11:35:31.078753 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-19 11:35:31.078770 | orchestrator | Friday 19 September 2025 11:33:20 +0000 (0:00:01.104) 0:00:11.816 ****** 2025-09-19 11:35:31.078782 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.078793 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.078804 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.078844 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:31.078855 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:31.078865 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:31.078876 | orchestrator | 2025-09-19 11:35:31.078887 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-19 11:35:31.078898 | orchestrator | Friday 19 September 2025 11:33:23 +0000 (0:00:02.598) 0:00:14.414 ****** 2025-09-19 11:35:31.078908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-19 11:35:31.078919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-19 11:35:31.078930 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-19 11:35:31.078940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-19 11:35:31.078951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-19 11:35:31.078961 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.078972 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-19 11:35:31.078982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.079000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.079011 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.079021 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.079032 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 11:35:31.079044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 11:35:31.079055 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 11:35:31.079066 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 11:35:31.079076 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 
11:35:31.079087 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 11:35:31.079098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079121 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 11:35:31.079132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079142 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079161 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079172 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079193 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 11:35:31.079204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079214 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079225 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079236 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079246 
| orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079257 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 11:35:31.079268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079284 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079294 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 11:35:31.079316 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 11:35:31.079326 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 11:35:31.079337 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 11:35:31.079348 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 11:35:31.079358 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 11:35:31.079369 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-19 11:35:31.079380 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-19 11:35:31.079391 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'absent'}) 2025-09-19 11:35:31.079407 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-19 11:35:31.079418 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-19 11:35:31.079429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-19 11:35:31.079440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 11:35:31.079451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 11:35:31.079462 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-19 11:35:31.079480 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 11:35:31.079491 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 11:35:31.079502 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 11:35:31.079512 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 11:35:31.079523 | orchestrator | 2025-09-19 11:35:31.079534 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079544 | orchestrator | Friday 19 September 2025 11:33:43 +0000 (0:00:20.467) 0:00:34.881 ****** 2025-09-19 11:35:31.079555 | orchestrator | 
2025-09-19 11:35:31.079565 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079576 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.248) 0:00:35.130 ****** 2025-09-19 11:35:31.079587 | orchestrator | 2025-09-19 11:35:31.079597 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079608 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.063) 0:00:35.194 ****** 2025-09-19 11:35:31.079618 | orchestrator | 2025-09-19 11:35:31.079629 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079639 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.062) 0:00:35.256 ****** 2025-09-19 11:35:31.079650 | orchestrator | 2025-09-19 11:35:31.079660 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079723 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.067) 0:00:35.323 ****** 2025-09-19 11:35:31.079735 | orchestrator | 2025-09-19 11:35:31.079746 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 11:35:31.079756 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.063) 0:00:35.387 ****** 2025-09-19 11:35:31.079767 | orchestrator | 2025-09-19 11:35:31.079778 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-19 11:35:31.079789 | orchestrator | Friday 19 September 2025 11:33:44 +0000 (0:00:00.067) 0:00:35.454 ****** 2025-09-19 11:35:31.079799 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.079810 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:31.079821 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.079832 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:31.079842 | orchestrator | ok: [testbed-node-2] 
2025-09-19 11:35:31.079853 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:31.079863 | orchestrator | 2025-09-19 11:35:31.079874 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-19 11:35:31.079885 | orchestrator | Friday 19 September 2025 11:33:45 +0000 (0:00:01.512) 0:00:36.967 ****** 2025-09-19 11:35:31.079896 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.079913 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.079932 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:31.079951 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:31.079970 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:31.079988 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.080006 | orchestrator | 2025-09-19 11:35:31.080024 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-19 11:35:31.080042 | orchestrator | 2025-09-19 11:35:31.080061 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 11:35:31.080080 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:30.262) 0:01:07.230 ****** 2025-09-19 11:35:31.080099 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:31.080118 | orchestrator | 2025-09-19 11:35:31.080131 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 11:35:31.080141 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:00.665) 0:01:07.895 ****** 2025-09-19 11:35:31.080161 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:31.080171 | orchestrator | 2025-09-19 11:35:31.080181 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-19 
11:35:31.080190 | orchestrator | Friday 19 September 2025 11:34:17 +0000 (0:00:00.523) 0:01:08.419 ****** 2025-09-19 11:35:31.080200 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.080209 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.080218 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.080228 | orchestrator | 2025-09-19 11:35:31.080238 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-19 11:35:31.080247 | orchestrator | Friday 19 September 2025 11:34:18 +0000 (0:00:00.997) 0:01:09.416 ****** 2025-09-19 11:35:31.080256 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.080266 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.080275 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.080291 | orchestrator | 2025-09-19 11:35:31.080302 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-19 11:35:31.080311 | orchestrator | Friday 19 September 2025 11:34:18 +0000 (0:00:00.396) 0:01:09.812 ****** 2025-09-19 11:35:31.080320 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.080330 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.080339 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.080349 | orchestrator | 2025-09-19 11:35:31.080359 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-19 11:35:31.080368 | orchestrator | Friday 19 September 2025 11:34:19 +0000 (0:00:00.325) 0:01:10.138 ****** 2025-09-19 11:35:31.080377 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.080387 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.080396 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.080406 | orchestrator | 2025-09-19 11:35:31.080415 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-19 11:35:31.080425 | orchestrator | Friday 19 September 2025 11:34:19 +0000 
(0:00:00.422) 0:01:10.561 ****** 2025-09-19 11:35:31.080434 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.080443 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.080453 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.080462 | orchestrator | 2025-09-19 11:35:31.080472 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-19 11:35:31.080481 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.607) 0:01:11.168 ****** 2025-09-19 11:35:31.080491 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080501 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080510 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080520 | orchestrator | 2025-09-19 11:35:31.080529 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-19 11:35:31.080539 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.327) 0:01:11.496 ****** 2025-09-19 11:35:31.080548 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080557 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080567 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080576 | orchestrator | 2025-09-19 11:35:31.080586 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-19 11:35:31.080596 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.322) 0:01:11.818 ****** 2025-09-19 11:35:31.080605 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080615 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080624 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080634 | orchestrator | 2025-09-19 11:35:31.080643 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-19 11:35:31.080653 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.296) 
0:01:12.115 ****** 2025-09-19 11:35:31.080663 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080689 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080706 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080716 | orchestrator | 2025-09-19 11:35:31.080725 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-19 11:35:31.080735 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.496) 0:01:12.612 ****** 2025-09-19 11:35:31.080744 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080754 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080763 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080772 | orchestrator | 2025-09-19 11:35:31.080782 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-19 11:35:31.080791 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.285) 0:01:12.897 ****** 2025-09-19 11:35:31.080800 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080810 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080820 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080829 | orchestrator | 2025-09-19 11:35:31.080838 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-19 11:35:31.080848 | orchestrator | Friday 19 September 2025 11:34:22 +0000 (0:00:00.296) 0:01:13.194 ****** 2025-09-19 11:35:31.080857 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080867 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080876 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080885 | orchestrator | 2025-09-19 11:35:31.080900 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-19 11:35:31.080910 | orchestrator | Friday 19 September 2025 11:34:22 +0000 (0:00:00.296) 
0:01:13.491 ****** 2025-09-19 11:35:31.080919 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080929 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080938 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.080947 | orchestrator | 2025-09-19 11:35:31.080957 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-19 11:35:31.080967 | orchestrator | Friday 19 September 2025 11:34:22 +0000 (0:00:00.290) 0:01:13.781 ****** 2025-09-19 11:35:31.080976 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.080985 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.080995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081004 | orchestrator | 2025-09-19 11:35:31.081014 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-19 11:35:31.081023 | orchestrator | Friday 19 September 2025 11:34:23 +0000 (0:00:00.510) 0:01:14.291 ****** 2025-09-19 11:35:31.081032 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081042 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081051 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081060 | orchestrator | 2025-09-19 11:35:31.081070 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-19 11:35:31.081079 | orchestrator | Friday 19 September 2025 11:34:23 +0000 (0:00:00.331) 0:01:14.623 ****** 2025-09-19 11:35:31.081088 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081098 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081107 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081117 | orchestrator | 2025-09-19 11:35:31.081126 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-19 11:35:31.081135 | orchestrator | Friday 19 September 2025 11:34:23 +0000 (0:00:00.295) 
0:01:14.918 ****** 2025-09-19 11:35:31.081145 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081154 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081170 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081180 | orchestrator | 2025-09-19 11:35:31.081189 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 11:35:31.081199 | orchestrator | Friday 19 September 2025 11:34:24 +0000 (0:00:00.311) 0:01:15.230 ****** 2025-09-19 11:35:31.081208 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:31.081224 | orchestrator | 2025-09-19 11:35:31.081233 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-19 11:35:31.081243 | orchestrator | Friday 19 September 2025 11:34:24 +0000 (0:00:00.790) 0:01:16.021 ****** 2025-09-19 11:35:31.081252 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.081262 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.081271 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.081281 | orchestrator | 2025-09-19 11:35:31.081290 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-19 11:35:31.081300 | orchestrator | Friday 19 September 2025 11:34:25 +0000 (0:00:00.459) 0:01:16.481 ****** 2025-09-19 11:35:31.081309 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.081319 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.081328 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.081337 | orchestrator | 2025-09-19 11:35:31.081347 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-19 11:35:31.081357 | orchestrator | Friday 19 September 2025 11:34:25 +0000 (0:00:00.521) 0:01:17.002 ****** 2025-09-19 11:35:31.081366 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:35:31.081376 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081385 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081394 | orchestrator | 2025-09-19 11:35:31.081404 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-19 11:35:31.081413 | orchestrator | Friday 19 September 2025 11:34:26 +0000 (0:00:00.480) 0:01:17.482 ****** 2025-09-19 11:35:31.081422 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081432 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081441 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081450 | orchestrator | 2025-09-19 11:35:31.081460 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-19 11:35:31.081469 | orchestrator | Friday 19 September 2025 11:34:26 +0000 (0:00:00.291) 0:01:17.774 ****** 2025-09-19 11:35:31.081478 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081488 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081497 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081507 | orchestrator | 2025-09-19 11:35:31.081516 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-19 11:35:31.081526 | orchestrator | Friday 19 September 2025 11:34:27 +0000 (0:00:00.322) 0:01:18.097 ****** 2025-09-19 11:35:31.081535 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081544 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081554 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081564 | orchestrator | 2025-09-19 11:35:31.081573 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-19 11:35:31.081583 | orchestrator | Friday 19 September 2025 11:34:27 +0000 (0:00:00.293) 0:01:18.391 ****** 2025-09-19 11:35:31.081592 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 11:35:31.081602 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081611 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081621 | orchestrator | 2025-09-19 11:35:31.081630 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-19 11:35:31.081640 | orchestrator | Friday 19 September 2025 11:34:27 +0000 (0:00:00.429) 0:01:18.820 ****** 2025-09-19 11:35:31.081649 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.081659 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.081685 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.081695 | orchestrator | 2025-09-19 11:35:31.081705 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 11:35:31.081714 | orchestrator | Friday 19 September 2025 11:34:28 +0000 (0:00:00.293) 0:01:19.114 ****** 2025-09-19 11:35:31.081733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081820 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081839 | orchestrator | 2025-09-19 11:35:31.081849 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 11:35:31.081859 | orchestrator | Friday 19 September 2025 11:34:29 +0000 (0:00:01.612) 0:01:20.726 ****** 2025-09-19 11:35:31.081869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.081973 | orchestrator | 2025-09-19 11:35:31.081983 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-19 11:35:31.081992 | orchestrator | Friday 19 September 2025 11:34:35 +0000 (0:00:05.387) 0:01:26.114 ****** 2025-09-19 11:35:31.082002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 11:35:31.082094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082171 | orchestrator | 2025-09-19 11:35:31.082180 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.082190 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:02.448) 0:01:28.562 ****** 2025-09-19 11:35:31.082200 | orchestrator | 2025-09-19 11:35:31.082209 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.082219 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:00.073) 0:01:28.636 ****** 2025-09-19 11:35:31.082228 | orchestrator | 2025-09-19 11:35:31.082238 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.082247 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:00.076) 0:01:28.713 ****** 2025-09-19 11:35:31.082263 | orchestrator | 2025-09-19 11:35:31.082273 | 
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 11:35:31.082282 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:00.078) 0:01:28.791 ****** 2025-09-19 11:35:31.082292 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.082302 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.082311 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.082321 | orchestrator | 2025-09-19 11:35:31.082330 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 11:35:31.082340 | orchestrator | Friday 19 September 2025 11:34:45 +0000 (0:00:07.793) 0:01:36.585 ****** 2025-09-19 11:35:31.082349 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.082359 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.082368 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.082378 | orchestrator | 2025-09-19 11:35:31.082387 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 11:35:31.082397 | orchestrator | Friday 19 September 2025 11:34:48 +0000 (0:00:02.646) 0:01:39.231 ****** 2025-09-19 11:35:31.082406 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.082416 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.082426 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.082435 | orchestrator | 2025-09-19 11:35:31.082445 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 11:35:31.082454 | orchestrator | Friday 19 September 2025 11:34:50 +0000 (0:00:02.447) 0:01:41.679 ****** 2025-09-19 11:35:31.082468 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:31.082478 | orchestrator | 2025-09-19 11:35:31.082488 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 11:35:31.082497 | orchestrator | Friday 19 
September 2025 11:34:50 +0000 (0:00:00.311) 0:01:41.990 ****** 2025-09-19 11:35:31.082507 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.082516 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.082526 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.082535 | orchestrator | 2025-09-19 11:35:31.082545 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 11:35:31.082554 | orchestrator | Friday 19 September 2025 11:34:51 +0000 (0:00:00.778) 0:01:42.769 ****** 2025-09-19 11:35:31.082564 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.082573 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.082583 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.082592 | orchestrator | 2025-09-19 11:35:31.082602 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-19 11:35:31.082611 | orchestrator | Friday 19 September 2025 11:34:52 +0000 (0:00:00.642) 0:01:43.411 ****** 2025-09-19 11:35:31.082621 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.082631 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.082640 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.082650 | orchestrator | 2025-09-19 11:35:31.082659 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 11:35:31.082715 | orchestrator | Friday 19 September 2025 11:34:53 +0000 (0:00:00.802) 0:01:44.214 ****** 2025-09-19 11:35:31.082727 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.082736 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.082745 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.082755 | orchestrator | 2025-09-19 11:35:31.082764 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 11:35:31.082774 | orchestrator | Friday 19 September 2025 11:34:53 +0000 
(0:00:00.714) 0:01:44.928 ****** 2025-09-19 11:35:31.082784 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.082793 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.082809 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.082819 | orchestrator | 2025-09-19 11:35:31.082829 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 11:35:31.082838 | orchestrator | Friday 19 September 2025 11:34:55 +0000 (0:00:01.240) 0:01:46.168 ****** 2025-09-19 11:35:31.082855 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.082864 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.082873 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.082883 | orchestrator | 2025-09-19 11:35:31.082892 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-19 11:35:31.082902 | orchestrator | Friday 19 September 2025 11:34:55 +0000 (0:00:00.821) 0:01:46.990 ****** 2025-09-19 11:35:31.082912 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.082921 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.082930 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.082940 | orchestrator | 2025-09-19 11:35:31.082949 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 11:35:31.082959 | orchestrator | Friday 19 September 2025 11:34:56 +0000 (0:00:00.294) 0:01:47.284 ****** 2025-09-19 11:35:31.082969 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082978 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.082994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083003 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083015 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083024 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083032 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083051 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083059 | orchestrator | 2025-09-19 11:35:31.083067 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 11:35:31.083075 | orchestrator | Friday 19 September 2025 11:34:57 +0000 (0:00:01.413) 0:01:48.698 ****** 2025-09-19 11:35:31.083083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083091 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083108 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083136 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083167 | orchestrator | 2025-09-19 11:35:31.083175 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-19 11:35:31.083182 | orchestrator | Friday 19 September 2025 11:35:02 +0000 (0:00:04.482) 0:01:53.181 ****** 2025-09-19 11:35:31.083196 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083212 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083229 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083270 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:35:31.083278 | orchestrator | 2025-09-19 11:35:31.083286 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.083294 | orchestrator | Friday 19 September 2025 11:35:05 +0000 (0:00:02.921) 0:01:56.102 ****** 2025-09-19 11:35:31.083302 | orchestrator | 2025-09-19 11:35:31.083310 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.083318 | orchestrator 
| Friday 19 September 2025 11:35:05 +0000 (0:00:00.124) 0:01:56.226 ****** 2025-09-19 11:35:31.083326 | orchestrator | 2025-09-19 11:35:31.083333 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:35:31.083341 | orchestrator | Friday 19 September 2025 11:35:05 +0000 (0:00:00.124) 0:01:56.351 ****** 2025-09-19 11:35:31.083349 | orchestrator | 2025-09-19 11:35:31.083357 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 11:35:31.083365 | orchestrator | Friday 19 September 2025 11:35:05 +0000 (0:00:00.069) 0:01:56.420 ****** 2025-09-19 11:35:31.083372 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.083380 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.083388 | orchestrator | 2025-09-19 11:35:31.083400 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 11:35:31.083408 | orchestrator | Friday 19 September 2025 11:35:11 +0000 (0:00:06.504) 0:02:02.925 ****** 2025-09-19 11:35:31.083416 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.083424 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.083432 | orchestrator | 2025-09-19 11:35:31.083439 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 11:35:31.083447 | orchestrator | Friday 19 September 2025 11:35:18 +0000 (0:00:06.294) 0:02:09.219 ****** 2025-09-19 11:35:31.083455 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:31.083463 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:31.083470 | orchestrator | 2025-09-19 11:35:31.083478 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 11:35:31.083486 | orchestrator | Friday 19 September 2025 11:35:25 +0000 (0:00:06.986) 0:02:16.206 ****** 2025-09-19 11:35:31.083494 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:35:31.083501 | orchestrator | 2025-09-19 11:35:31.083509 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 11:35:31.083517 | orchestrator | Friday 19 September 2025 11:35:25 +0000 (0:00:00.146) 0:02:16.352 ****** 2025-09-19 11:35:31.083525 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.083533 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.083540 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.083548 | orchestrator | 2025-09-19 11:35:31.083556 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 11:35:31.083563 | orchestrator | Friday 19 September 2025 11:35:26 +0000 (0:00:00.812) 0:02:17.165 ****** 2025-09-19 11:35:31.083571 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.083579 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.083587 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.083594 | orchestrator | 2025-09-19 11:35:31.083602 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-19 11:35:31.083610 | orchestrator | Friday 19 September 2025 11:35:26 +0000 (0:00:00.653) 0:02:17.818 ****** 2025-09-19 11:35:31.083618 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.083625 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.083633 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.083641 | orchestrator | 2025-09-19 11:35:31.083649 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 11:35:31.083665 | orchestrator | Friday 19 September 2025 11:35:27 +0000 (0:00:00.813) 0:02:18.632 ****** 2025-09-19 11:35:31.083687 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:31.083695 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:31.083703 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:31.083711 | orchestrator 
| 2025-09-19 11:35:31.083719 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 11:35:31.083727 | orchestrator | Friday 19 September 2025 11:35:28 +0000 (0:00:00.650) 0:02:19.282 ****** 2025-09-19 11:35:31.083734 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.083742 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.083750 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.083757 | orchestrator | 2025-09-19 11:35:31.083765 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 11:35:31.083773 | orchestrator | Friday 19 September 2025 11:35:29 +0000 (0:00:00.764) 0:02:20.047 ****** 2025-09-19 11:35:31.083781 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:31.083789 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:31.083796 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:31.083804 | orchestrator | 2025-09-19 11:35:31.083812 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:35:31.083820 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 11:35:31.083828 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-19 11:35:31.083839 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-19 11:35:31.083847 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:35:31.083855 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:35:31.083863 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:35:31.083871 | orchestrator | 2025-09-19 11:35:31.083878 | orchestrator | 2025-09-19 
11:35:31.083886 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:35:31.083894 | orchestrator | Friday 19 September 2025 11:35:30 +0000 (0:00:01.036) 0:02:21.083 ****** 2025-09-19 11:35:31.083902 | orchestrator | =============================================================================== 2025-09-19 11:35:31.083909 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.26s 2025-09-19 11:35:31.083917 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.47s 2025-09-19 11:35:31.083924 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.30s 2025-09-19 11:35:31.083932 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.43s 2025-09-19 11:35:31.083940 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.94s 2025-09-19 11:35:31.083947 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.39s 2025-09-19 11:35:31.083955 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s 2025-09-19 11:35:31.083968 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.92s 2025-09-19 11:35:31.083976 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.60s 2025-09-19 11:35:31.083984 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.45s 2025-09-19 11:35:31.083991 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.00s 2025-09-19 11:35:31.083999 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.88s 2025-09-19 11:35:31.084012 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.85s 2025-09-19 11:35:31.084020 | 
orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.62s 2025-09-19 11:35:31.084028 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.61s 2025-09-19 11:35:31.084036 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.51s 2025-09-19 11:35:31.084043 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-09-19 11:35:31.084051 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.24s 2025-09-19 11:35:31.084059 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.24s 2025-09-19 11:35:31.084066 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.15s 2025-09-19 11:35:31.084074 | orchestrator | 2025-09-19 11:35:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:35:34.126627 | orchestrator | 2025-09-19 11:35:34 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:35:34.126810 | orchestrator | 2025-09-19 11:35:34 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:35:34.126837 | orchestrator | 2025-09-19 11:35:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:35:37.171625 | orchestrator | 2025-09-19 11:35:37 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:35:37.173329 | orchestrator | 2025-09-19 11:35:37 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 2025-09-19 11:35:37.173568 | orchestrator | 2025-09-19 11:35:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:35:40.212747 | orchestrator | 2025-09-19 11:35:40 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:35:40.213291 | orchestrator | 2025-09-19 11:35:40 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state STARTED 
2025-09-19 11:36:13 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED [... repeated polling output elided: tasks d76c182b-9a70-4d6f-8aec-3a3b14bd656f and 8b21659f-6e5f-4ea9-abe2-d76570beb30d remained in state STARTED, checked every ~3 seconds from 11:36:13 to 11:38:18 ...]
2025-09-19 11:38:18.630745 | orchestrator | 2025-09-19 11:38:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:38:21.681212 | orchestrator | 2025-09-19 11:38:21 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:38:21.691148 | orchestrator | 2025-09-19 11:38:21 | INFO  | Task 8b21659f-6e5f-4ea9-abe2-d76570beb30d is in state SUCCESS 2025-09-19 11:38:21.693727 | orchestrator | 2025-09-19 11:38:21.693799 | orchestrator | 2025-09-19 11:38:21.693814 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:38:21.693827 | orchestrator | 2025-09-19 11:38:21.693838 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:38:21.693849 | orchestrator | Friday 19 September 2025 11:31:54 +0000 (0:00:00.249) 0:00:00.249 ****** 2025-09-19 11:38:21.693860 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.693872 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.693883 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.693894 | orchestrator | 2025-09-19 11:38:21.693905 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:38:21.693915 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:00.544) 0:00:00.794 ****** 2025-09-19 11:38:21.693926 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-19 11:38:21.693937 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-19 11:38:21.693948 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-19 11:38:21.693958 | orchestrator | 2025-09-19 11:38:21.694110 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-19 11:38:21.694127 | orchestrator | 2025-09-19 11:38:21.694137 | orchestrator | TASK [loadbalancer : include_tasks] 
******************************************** 2025-09-19 11:38:21.694148 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:00.523) 0:00:01.317 ****** 2025-09-19 11:38:21.694159 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.694170 | orchestrator | 2025-09-19 11:38:21.694180 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-19 11:38:21.694191 | orchestrator | Friday 19 September 2025 11:31:56 +0000 (0:00:00.995) 0:00:02.312 ****** 2025-09-19 11:38:21.694202 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.694213 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.694224 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.694234 | orchestrator | 2025-09-19 11:38:21.694245 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-19 11:38:21.694256 | orchestrator | Friday 19 September 2025 11:31:57 +0000 (0:00:00.805) 0:00:03.118 ****** 2025-09-19 11:38:21.694267 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.694277 | orchestrator | 2025-09-19 11:38:21.694288 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-19 11:38:21.694480 | orchestrator | Friday 19 September 2025 11:31:58 +0000 (0:00:01.024) 0:00:04.142 ****** 2025-09-19 11:38:21.694493 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.694504 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.694515 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.694526 | orchestrator | 2025-09-19 11:38:21.694537 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-19 11:38:21.694547 | orchestrator | Friday 19 September 2025 11:31:59 +0000 (0:00:00.843) 0:00:04.985 ****** 2025-09-19 11:38:21.694558 | orchestrator 
| changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694612 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694623 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694634 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:38:21.694646 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:38:21.694657 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:38:21.694698 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:38:21.694710 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:38:21.694720 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:38:21.694731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:38:21.694742 | orchestrator | 2025-09-19 11:38:21.694752 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 11:38:21.694763 | orchestrator | Friday 19 September 2025 11:32:02 +0000 (0:00:03.186) 0:00:08.172 ****** 2025-09-19 11:38:21.694774 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 11:38:21.694785 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 
11:38:21.694811 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 11:38:21.694822 | orchestrator | 2025-09-19 11:38:21.694833 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 11:38:21.694843 | orchestrator | Friday 19 September 2025 11:32:03 +0000 (0:00:00.787) 0:00:08.960 ****** 2025-09-19 11:38:21.694854 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 11:38:21.694864 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 11:38:21.694875 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 11:38:21.694886 | orchestrator | 2025-09-19 11:38:21.694896 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 11:38:21.694907 | orchestrator | Friday 19 September 2025 11:32:04 +0000 (0:00:01.357) 0:00:10.317 ****** 2025-09-19 11:38:21.694918 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-19 11:38:21.694929 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.694985 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-19 11:38:21.694999 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.695010 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-19 11:38:21.695021 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.695031 | orchestrator | 2025-09-19 11:38:21.695042 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-19 11:38:21.695053 | orchestrator | Friday 19 September 2025 11:32:05 +0000 (0:00:00.427) 0:00:10.745 ****** 2025-09-19 11:38:21.695120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.695221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.695232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.695243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.695255 | orchestrator | 2025-09-19 11:38:21.695265 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-19 11:38:21.695285 | orchestrator | Friday 19 September 2025 11:32:07 +0000 (0:00:02.449) 0:00:13.195 ****** 2025-09-19 11:38:21.695295 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.695306 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.695317 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.695328 | orchestrator | 2025-09-19 11:38:21.695338 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] **** 2025-09-19 11:38:21.695349 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:01.123) 0:00:14.318 ****** 2025-09-19 11:38:21.695360 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-19 11:38:21.695370 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-19 11:38:21.695381 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-19 11:38:21.695391 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-19 11:38:21.695402 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-19 11:38:21.695455 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-19 11:38:21.695468 | orchestrator | 2025-09-19 11:38:21.695479 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-19 11:38:21.695489 | orchestrator | Friday 19 September 2025 11:32:12 +0000 (0:00:03.916) 0:00:18.235 ****** 2025-09-19 11:38:21.695500 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.695510 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.695521 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.695532 | orchestrator | 2025-09-19 11:38:21.695542 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-19 11:38:21.695639 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:01.181) 0:00:19.417 ****** 2025-09-19 11:38:21.695651 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.695724 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.695737 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.695747 | orchestrator | 2025-09-19 11:38:21.695758 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-19 11:38:21.695769 | orchestrator | Friday 19 September 2025 11:32:16 +0000 (0:00:02.191) 0:00:21.609 ****** 2025-09-19 11:38:21.695785 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.695806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.695819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.695839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.695850 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.695862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.695874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.695885 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.695901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.695913 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.695932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.695950 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.695962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.695973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.695984 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.695995 | 
orchestrator | 2025-09-19 11:38:21.696006 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-19 11:38:21.696016 | orchestrator | Friday 19 September 2025 11:32:17 +0000 (0:00:01.485) 0:00:23.094 ****** 2025-09-19 11:38:21.696055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.696199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', 
'__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.696210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.696237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', 
'__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.696281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.696410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4', 
'__omit_place_holder__d6beb0a63bbc579c231e6ab224757492881e6cd4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:38:21.696439 | orchestrator | 2025-09-19 11:38:21.696451 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-19 11:38:21.696462 | orchestrator | Friday 19 September 2025 11:32:20 +0000 (0:00:03.259) 0:00:26.354 ****** 2025-09-19 11:38:21.696473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.696569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.696581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.696592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.696603 | orchestrator | 2025-09-19 11:38:21.696614 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-19 11:38:21.696625 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:03.344) 0:00:29.698 ****** 2025-09-19 11:38:21.696635 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:38:21.696651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:38:21.696670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:38:21.696681 | orchestrator | 2025-09-19 11:38:21.696692 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-19 11:38:21.696702 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:02.714) 0:00:32.413 ****** 2025-09-19 11:38:21.696713 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:38:21.696724 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:38:21.696735 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:38:21.696746 | orchestrator | 2025-09-19 11:38:21.698395 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-19 11:38:21.698468 | orchestrator | Friday 19 September 2025 11:32:32 +0000 (0:00:05.867) 0:00:38.281 ****** 2025-09-19 11:38:21.698481 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.698492 
| orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.698503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.698514 | orchestrator | 2025-09-19 11:38:21.698525 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-19 11:38:21.698536 | orchestrator | Friday 19 September 2025 11:32:34 +0000 (0:00:01.396) 0:00:39.677 ****** 2025-09-19 11:38:21.698547 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:38:21.698559 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:38:21.698570 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:38:21.698581 | orchestrator | 2025-09-19 11:38:21.698591 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-19 11:38:21.698602 | orchestrator | Friday 19 September 2025 11:32:36 +0000 (0:00:02.758) 0:00:42.435 ****** 2025-09-19 11:38:21.698612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:38:21.698623 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:38:21.698634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:38:21.698645 | orchestrator | 2025-09-19 11:38:21.698655 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-19 11:38:21.698666 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:03.661) 0:00:46.097 ****** 2025-09-19 11:38:21.698764 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2025-09-19 11:38:21.698778 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-19 11:38:21.698789 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-19 11:38:21.698800 | orchestrator | 2025-09-19 11:38:21.698810 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-19 11:38:21.698821 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:02.189) 0:00:48.286 ****** 2025-09-19 11:38:21.698832 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-19 11:38:21.698842 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-19 11:38:21.698853 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-19 11:38:21.698864 | orchestrator | 2025-09-19 11:38:21.698874 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 11:38:21.698885 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:02.395) 0:00:50.682 ****** 2025-09-19 11:38:21.698895 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.698955 | orchestrator | 2025-09-19 11:38:21.698969 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-19 11:38:21.698981 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.605) 0:00:51.287 ****** 2025-09-19 11:38:21.698994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:38:21.699091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.699103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.699123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:38:21.699134 | orchestrator | 2025-09-19 11:38:21.699145 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 11:38:21.699156 | orchestrator | Friday 19 September 2025 11:32:49 +0000 (0:00:04.147) 0:00:55.435 ****** 2025-09-19 11:38:21.699174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.699259 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.699275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699378 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.699390 | orchestrator | 2025-09-19 11:38:21.699400 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 11:38:21.699411 | orchestrator | Friday 19 September 2025 11:32:50 +0000 (0:00:00.703) 0:00:56.138 ****** 2025-09-19 11:38:21.699479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699523 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.699540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699583 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.699625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699663 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.699673 | orchestrator | 2025-09-19 11:38:21.699683 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 11:38:21.699692 | orchestrator | Friday 19 September 2025 11:32:51 +0000 (0:00:00.818) 0:00:56.957 ****** 2025-09-19 11:38:21.699707 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699829 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.699839 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.699849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.699859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.699875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.699885 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.699895 | orchestrator | 2025-09-19 11:38:21.699904 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2025-09-19 11:38:21.699914 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:01.723) 0:00:58.680 ****** 2025-09-19 11:38:21.700007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.700028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.700039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-19 11:38:21.700048 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.700058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.700068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.700083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.700093 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.700109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.700125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.700135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:38:21.700145 | orchestrator | skipping: [testbed-node-2] 
2025-09-19 11:38:21.700155 | orchestrator | 2025-09-19 11:38:21.700165 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 11:38:21.700174 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:01.803) 0:01:00.483 ****** 2025-09-19 11:38:21.700184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:38:21.700194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:38:21.700209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700219 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.700234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700269 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.700279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700309 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.700318 | orchestrator |
2025-09-19 11:38:21.700328 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-19 11:38:21.700337 | orchestrator | Friday 19 September 2025 11:32:58 +0000 (0:00:03.039) 0:01:03.523 ******
2025-09-19 11:38:21.700351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700397 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.700407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700490 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.700507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700549 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.700559 | orchestrator |
2025-09-19 11:38:21.700568 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-09-19 11:38:21.700578 | orchestrator | Friday 19 September 2025 11:32:59 +0000 (0:00:01.452) 0:01:04.975 ******
2025-09-19 11:38:21.700588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700618 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.700627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700675 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.700685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700714 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.700723 | orchestrator |
2025-09-19 11:38:21.700733 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-09-19 11:38:21.700742 | orchestrator | Friday 19 September 2025 11:33:00 +0000 (0:00:00.739) 0:01:05.715 ******
2025-09-19 11:38:21.700752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700793 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.700808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700851 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.700861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.700871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.700902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.700949 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.700960 | orchestrator |
2025-09-19 11:38:21.700970 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-09-19 11:38:21.700980 | orchestrator | Friday 19 September 2025 11:33:01 +0000 (0:00:01.063) 0:01:06.779 ******
2025-09-19 11:38:21.700989 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-19 11:38:21.700999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-19 11:38:21.701014 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-19 11:38:21.701024 | orchestrator |
2025-09-19 11:38:21.701033 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-09-19 11:38:21.701043 | orchestrator | Friday 19 September 2025 11:33:02 +0000 (0:00:01.702) 0:01:08.481 ******
2025-09-19 11:38:21.701052 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 11:38:21.701062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 11:38:21.701071 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-19 11:38:21.701081 | orchestrator |
2025-09-19 11:38:21.701090 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-09-19 11:38:21.701099 | orchestrator | Friday 19 September 2025 11:33:04 +0000 (0:00:01.689) 0:01:10.171 ******
2025-09-19 11:38:21.701109 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:38:21.701118 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:38:21.701128 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:38:21.701137 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:38:21.701147 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.701156 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:38:21.701222 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.701234 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:38:21.701243 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.701253 | orchestrator |
2025-09-19 11:38:21.701262 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-09-19 11:38:21.701271 | orchestrator | Friday 19 September 2025 11:33:05 +0000 (0:00:01.313) 0:01:11.485 ******
2025-09-19 11:38:21.701281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.701298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.701313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 11:38:21.701330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.701341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.701351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:38:21.701361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.701377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.701387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:38:21.701397 | orchestrator |
2025-09-19 11:38:21.701406 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-09-19 11:38:21.701416 | orchestrator | Friday 19 September 2025 11:33:08 +0000 (0:00:02.727) 0:01:14.213 ******
2025-09-19 11:38:21.701476 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.701486 | orchestrator |
2025-09-19 11:38:21.701496 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-09-19 11:38:21.701505 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:01.025) 0:01:15.238 ******
2025-09-19 11:38:21.701521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 11:38:21.701539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 11:38:21.701550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.701560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.701577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 11:38:21.701587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 11:38:21.701601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.704808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.704843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 11:38:21.704862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-19 11:38:21.704870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.704879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.704887 | orchestrator |
2025-09-19 11:38:21.704895 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-09-19 11:38:21.704903 | orchestrator | Friday 19 September 2025 11:33:16 +0000 (0:00:06.371) 0:01:21.609 ******
2025-09-19 11:38:21.704916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-19 11:38:21.704933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout':
'30'}}})  2025-09-19 11:38:21.704942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.704954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.704962 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.704971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 11:38:21.704979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.704990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.704998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2025-09-19 11:38:21.705006 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 11:38:21.705032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.705039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705053 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705059 | orchestrator | 2025-09-19 11:38:21.705066 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-19 11:38:21.705073 | orchestrator | Friday 19 September 2025 11:33:16 +0000 (0:00:00.867) 0:01:22.476 ****** 2025-09-19 11:38:21.705080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705095 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.705102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:38:21.705136 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705142 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705149 | orchestrator | 2025-09-19 11:38:21.705159 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-19 11:38:21.705170 | orchestrator | Friday 19 September 2025 11:33:18 +0000 (0:00:01.221) 0:01:23.698 ****** 2025-09-19 11:38:21.705177 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.705183 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.705190 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.705196 | orchestrator | 2025-09-19 11:38:21.705203 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-19 11:38:21.705210 | orchestrator | Friday 19 September 2025 11:33:19 +0000 (0:00:01.424) 0:01:25.122 ****** 2025-09-19 11:38:21.705216 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.705223 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.705230 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.705236 | orchestrator | 2025-09-19 11:38:21.705243 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-19 11:38:21.705249 | orchestrator | Friday 19 September 2025 
11:33:21 +0000 (0:00:01.850) 0:01:26.973 ****** 2025-09-19 11:38:21.705256 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.705262 | orchestrator | 2025-09-19 11:38:21.705269 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-19 11:38:21.705276 | orchestrator | Friday 19 September 2025 11:33:22 +0000 (0:00:00.698) 0:01:27.672 ****** 2025-09-19 11:38:21.705283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.705290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.705322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.705344 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705360 | orchestrator | 2025-09-19 11:38:21.705372 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-19 11:38:21.705382 | orchestrator | Friday 19 September 2025 11:33:26 +0000 (0:00:04.483) 0:01:32.155 ****** 2025-09-19 11:38:21.705395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.705403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705418 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.705444 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.705452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705521 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.705542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705550 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.705558 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705565 | orchestrator | 2025-09-19 11:38:21.705573 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-19 11:38:21.705580 | orchestrator | Friday 19 September 2025 11:33:27 +0000 (0:00:00.613) 0:01:32.769 ****** 2025-09-19 11:38:21.705588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:38:21.705597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:38:21.705604 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.705612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:38:21.705624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2025-09-19 11:38:21.705632 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:38:21.705653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:38:21.705660 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705668 | orchestrator | 2025-09-19 11:38:21.705674 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-19 11:38:21.705681 | orchestrator | Friday 19 September 2025 11:33:28 +0000 (0:00:01.041) 0:01:33.810 ****** 2025-09-19 11:38:21.705687 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.705694 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.705701 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.705707 | orchestrator | 2025-09-19 11:38:21.705714 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-19 11:38:21.705720 | orchestrator | Friday 19 September 2025 11:33:29 +0000 (0:00:01.465) 0:01:35.275 ****** 2025-09-19 11:38:21.705727 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.705734 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.705740 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.705747 | orchestrator | 2025-09-19 11:38:21.705757 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-19 11:38:21.705764 | orchestrator | Friday 19 September 2025 11:33:31 +0000 (0:00:02.048) 0:01:37.324 ****** 2025-09-19 11:38:21.705770 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 11:38:21.705777 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705790 | orchestrator | 2025-09-19 11:38:21.705797 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-19 11:38:21.705803 | orchestrator | Friday 19 September 2025 11:33:32 +0000 (0:00:00.312) 0:01:37.636 ****** 2025-09-19 11:38:21.705810 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.705816 | orchestrator | 2025-09-19 11:38:21.705823 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-19 11:38:21.705830 | orchestrator | Friday 19 September 2025 11:33:33 +0000 (0:00:00.920) 0:01:38.557 ****** 2025-09-19 11:38:21.705837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:38:21.705844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:38:21.705855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:38:21.705862 | orchestrator | 2025-09-19 11:38:21.705869 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-19 11:38:21.705875 | orchestrator | Friday 19 September 2025 11:33:35 +0000 (0:00:02.759) 0:01:41.317 ****** 2025-09-19 11:38:21.705889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:38:21.705897 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.705904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:38:21.705911 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.705918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:38:21.705928 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.705935 | orchestrator | 2025-09-19 11:38:21.705941 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-19 11:38:21.705948 | orchestrator | Friday 19 September 2025 11:33:37 +0000 (0:00:01.600) 0:01:42.917 ****** 2025-09-19 11:38:21.705956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.705964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.705972 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.705978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.705986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.705992 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.706010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:38:21.706066 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706076 | orchestrator | 2025-09-19 11:38:21.706083 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2025-09-19 11:38:21.706089 | orchestrator | Friday 19 September 2025 11:33:39 +0000 (0:00:01.690) 0:01:44.608 ****** 2025-09-19 11:38:21.706096 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706103 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706109 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706116 | orchestrator | 2025-09-19 11:38:21.706123 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-19 11:38:21.706134 | orchestrator | Friday 19 September 2025 11:33:39 +0000 (0:00:00.677) 0:01:45.286 ****** 2025-09-19 11:38:21.706140 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706147 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706154 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706160 | orchestrator | 2025-09-19 11:38:21.706167 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-19 11:38:21.706174 | orchestrator | Friday 19 September 2025 11:33:40 +0000 (0:00:01.225) 0:01:46.511 ****** 2025-09-19 11:38:21.706180 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.706186 | orchestrator | 2025-09-19 11:38:21.706193 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-19 11:38:21.706200 | orchestrator | Friday 19 September 2025 11:33:41 +0000 (0:00:00.721) 0:01:47.233 ****** 2025-09-19 11:38:21.706221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.706229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706253 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.706272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.706299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706331 | orchestrator | 2025-09-19 11:38:21.706338 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-19 11:38:21.706345 | orchestrator | Friday 19 September 2025 11:33:45 +0000 (0:00:03.462) 0:01:50.695 ****** 2025-09-19 11:38:21.706355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.706362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706394 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.706408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706453 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.706467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706490 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706497 | orchestrator | 2025-09-19 11:38:21.706504 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-19 11:38:21.706510 | orchestrator | Friday 19 September 2025 11:33:46 +0000 (0:00:00.960) 0:01:51.656 ****** 2025-09-19 11:38:21.706524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706542 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706548 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:38:21.706582 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706589 | orchestrator | 2025-09-19 11:38:21.706595 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-19 11:38:21.706602 | orchestrator | Friday 19 September 2025 11:33:47 +0000 (0:00:00.973) 0:01:52.630 ****** 2025-09-19 11:38:21.706609 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.706615 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.706622 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.706629 | orchestrator | 2025-09-19 11:38:21.706635 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-19 11:38:21.706642 | orchestrator | Friday 19 September 2025 11:33:48 +0000 (0:00:01.325) 0:01:53.955 ****** 2025-09-19 11:38:21.706648 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.706655 | 
orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.706661 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.706668 | orchestrator | 2025-09-19 11:38:21.706674 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-19 11:38:21.706681 | orchestrator | Friday 19 September 2025 11:33:50 +0000 (0:00:02.231) 0:01:56.187 ****** 2025-09-19 11:38:21.706688 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706694 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706701 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706707 | orchestrator | 2025-09-19 11:38:21.706714 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-19 11:38:21.706721 | orchestrator | Friday 19 September 2025 11:33:51 +0000 (0:00:00.533) 0:01:56.721 ****** 2025-09-19 11:38:21.706727 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.706734 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.706740 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.706747 | orchestrator | 2025-09-19 11:38:21.706753 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-19 11:38:21.706760 | orchestrator | Friday 19 September 2025 11:33:51 +0000 (0:00:00.361) 0:01:57.083 ****** 2025-09-19 11:38:21.706781 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.706788 | orchestrator | 2025-09-19 11:38:21.706794 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-19 11:38:21.706801 | orchestrator | Friday 19 September 2025 11:33:52 +0000 (0:00:00.781) 0:01:57.865 ****** 2025-09-19 11:38:21.706815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:38:21.706826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.706834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-19 11:38:21.706861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:38:21.706876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.706898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:38:21.706941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.706962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.706990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707017 | orchestrator | 
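A note on the loop above: the per-service dicts that kolla-ansible's haproxy-config role iterates over follow the pattern visible in the log items, and only entries carrying a `haproxy` key (and `enabled: True`) get load-balancer config templated, which is why `designate-api` reports "changed" on each node while `central`, `mdns`, `producer`, `worker`, and `sink` are "skipping". A minimal sketch of that filtering, assuming this simplified predicate mirrors the role's behavior (the function name `services_needing_haproxy` is hypothetical, not part of kolla-ansible):

```python
# Hypothetical sketch, not the actual kolla-ansible role logic: filter the
# project_services mapping (shaped like the loop items in the log above)
# down to services that should produce haproxy frontend/backend config.

designate_services = {
    "designate-api": {
        "enabled": True,
        # Presence of a 'haproxy' key marks a service as load-balanced.
        "haproxy": {
            "designate_api": {"enabled": "yes", "mode": "http",
                              "external": False, "port": "9001"},
            "designate_api_external": {"enabled": "yes", "mode": "http",
                                       "external": True, "port": "9001"},
        },
    },
    "designate-central": {"enabled": True},   # no 'haproxy' key -> skipped
    "designate-mdns": {"enabled": True},      # no 'haproxy' key -> skipped
    "designate-sink": {"enabled": False},     # disabled -> skipped
}


def services_needing_haproxy(services):
    """Names of enabled services that define haproxy frontends."""
    return [name for name, svc in services.items()
            if svc.get("enabled") and "haproxy" in svc]


print(services_needing_haproxy(designate_services))
```

Under this reading, only `designate-api` survives the filter on all three nodes, matching the one "changed" item per host in the task output above.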
2025-09-19 11:38:21.707024 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-19 11:38:21.707030 | orchestrator | Friday 19 September 2025 11:33:56 +0000 (0:00:03.789) 0:02:01.654 ****** 2025-09-19 11:38:21.707042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:38:21.707049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.707056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707101 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.707108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:38:21.707115 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.707126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707167 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.707174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:38:21.707186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:38:21.707193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707228 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.707235 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.707241 | orchestrator | 2025-09-19 11:38:21.707248 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-19 11:38:21.707311 | orchestrator | Friday 19 September 2025 11:33:56 +0000 (0:00:00.837) 0:02:02.492 ****** 2025-09-19 11:38:21.707319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707334 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.707341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707354 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 11:38:21.707361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:38:21.707374 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.707381 | orchestrator | 2025-09-19 11:38:21.707388 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-19 11:38:21.707394 | orchestrator | Friday 19 September 2025 11:33:57 +0000 (0:00:01.024) 0:02:03.516 ****** 2025-09-19 11:38:21.707401 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.707407 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.707414 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.707459 | orchestrator | 2025-09-19 11:38:21.707467 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-19 11:38:21.707474 | orchestrator | Friday 19 September 2025 11:33:59 +0000 (0:00:01.779) 0:02:05.296 ****** 2025-09-19 11:38:21.707480 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.707487 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.707493 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.707500 | orchestrator | 2025-09-19 11:38:21.707506 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-19 11:38:21.707513 | orchestrator | Friday 19 September 2025 11:34:01 +0000 (0:00:01.795) 0:02:07.092 ****** 2025-09-19 11:38:21.707519 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.707526 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.707533 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.707539 | orchestrator | 2025-09-19 11:38:21.707549 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-19 11:38:21.707556 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:00.505) 0:02:07.597 ****** 2025-09-19 11:38:21.707563 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.707569 | orchestrator | 2025-09-19 11:38:21.707576 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-19 11:38:21.707582 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:00.795) 0:02:08.393 ****** 2025-09-19 11:38:21.707597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:38:21.707612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.707627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:38:21.707641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.707657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:38:21.707669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.707677 | orchestrator | 2025-09-19 11:38:21.707684 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-19 11:38:21.707690 | orchestrator | Friday 19 
September 2025 11:34:07 +0000 (0:00:04.310) 0:02:12.703 ****** 2025-09-19 11:38:21.707704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:38:21.707716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.707724 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.707735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
2025-09-19 11:38:21 | INFO  | Wait 1 second(s) until the next check
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:38:21.707746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.708113 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:38:21.708136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.708148 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708155 | orchestrator | 2025-09-19 11:38:21.708161 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-19 11:38:21.708167 | orchestrator | Friday 19 September 2025 11:34:10 +0000 (0:00:03.029) 0:02:15.732 ****** 2025-09-19 11:38:21.708174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:38:21.708180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:38:21.708187 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:38:21.708203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 
11:38:21.708214 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:38:21.708232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:38:21.708238 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708244 | orchestrator | 2025-09-19 11:38:21.708251 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-19 11:38:21.708257 | orchestrator | Friday 19 September 2025 11:34:13 +0000 (0:00:03.127) 0:02:18.860 ****** 2025-09-19 11:38:21.708263 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.708269 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.708275 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.708281 | orchestrator | 2025-09-19 11:38:21.708287 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-19 11:38:21.708293 | orchestrator | Friday 19 September 2025 11:34:14 +0000 (0:00:01.319) 
0:02:20.180 ****** 2025-09-19 11:38:21.708299 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.708305 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.708311 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.708317 | orchestrator | 2025-09-19 11:38:21.708324 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-19 11:38:21.708330 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:02.125) 0:02:22.305 ****** 2025-09-19 11:38:21.708336 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708342 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708348 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708354 | orchestrator | 2025-09-19 11:38:21.708360 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-19 11:38:21.708366 | orchestrator | Friday 19 September 2025 11:34:17 +0000 (0:00:00.562) 0:02:22.868 ****** 2025-09-19 11:38:21.708372 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.708378 | orchestrator | 2025-09-19 11:38:21.708384 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-19 11:38:21.708390 | orchestrator | Friday 19 September 2025 11:34:18 +0000 (0:00:00.894) 0:02:23.762 ****** 2025-09-19 11:38:21.708396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:38:21.708403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:38:21.708436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:38:21.708444 | orchestrator | 2025-09-19 11:38:21.708450 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-19 11:38:21.708456 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:03.426) 0:02:27.189 ****** 2025-09-19 11:38:21.708468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:38:21.708474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:38:21.708481 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708487 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:38:21.708500 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708506 | orchestrator | 2025-09-19 11:38:21.708512 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-19 11:38:21.708518 | orchestrator | Friday 19 September 2025 11:34:22 +0000 (0:00:00.704) 0:02:27.893 ****** 2025-09-19 11:38:21.708524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:38:21.708535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:38:21.708541 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:38:21.708553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:38:21.708559 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:38:21.708571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  
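The glance and grafana items logged above all carry the same HAProxy member pattern: one `server <name> <ip>:<port> check inter 2000 rise 2 fall 5` line per backend node in `custom_member_list`. As a minimal illustrative sketch of that pattern (this is not the actual kolla-ansible Jinja template; `member_lines` is a hypothetical helper introduced here for illustration):

```python
# Sketch: reproduce the per-node HAProxy "server" lines seen in the
# custom_member_list entries above. Hypothetical helper, not kolla-ansible code.

def member_lines(nodes, port, check="check inter 2000 rise 2 fall 5"):
    """Render one HAProxy backend server line per (hostname, ip) pair."""
    return [f"server {name} {ip}:{port} {check}" for name, ip in nodes]

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]

for line in member_lines(nodes, 9292):
    print(line)
# First line: server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
```

The TLS-proxy variants in the log differ only in the check suffix (appending `ssl verify required ca-file ca-certificates.crt`), which is why the `check` string is parameterized here.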
2025-09-19 11:38:21.708578 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708584 | orchestrator | 2025-09-19 11:38:21.708593 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-19 11:38:21.708599 | orchestrator | Friday 19 September 2025 11:34:23 +0000 (0:00:00.654) 0:02:28.548 ****** 2025-09-19 11:38:21.708605 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.708611 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.708617 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.708623 | orchestrator | 2025-09-19 11:38:21.708629 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-19 11:38:21.708636 | orchestrator | Friday 19 September 2025 11:34:24 +0000 (0:00:01.306) 0:02:29.854 ****** 2025-09-19 11:38:21.708642 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.708648 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.708654 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.708660 | orchestrator | 2025-09-19 11:38:21.708666 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-19 11:38:21.708672 | orchestrator | Friday 19 September 2025 11:34:26 +0000 (0:00:02.082) 0:02:31.937 ****** 2025-09-19 11:38:21.708678 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708684 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708694 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708700 | orchestrator | 2025-09-19 11:38:21.708706 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-19 11:38:21.708712 | orchestrator | Friday 19 September 2025 11:34:26 +0000 (0:00:00.415) 0:02:32.352 ****** 2025-09-19 11:38:21.708718 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.708724 | 
orchestrator | 2025-09-19 11:38:21.708730 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-19 11:38:21.708736 | orchestrator | Friday 19 September 2025 11:34:27 +0000 (0:00:01.015) 0:02:33.368 ****** 2025-09-19 11:38:21.708744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:38:21.708759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:38:21.708777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:38:21.708788 | orchestrator | 2025-09-19 11:38:21.708794 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 11:38:21.708800 | orchestrator | Friday 19 September 2025 11:34:33 +0000 (0:00:05.521) 0:02:38.890 ****** 2025-09-19 11:38:21.708815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:38:21.708828 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.708835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:38:21.708842 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:38:21.708866 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.708873 | orchestrator | 
2025-09-19 11:38:21.708879 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-19 11:38:21.708885 | orchestrator | Friday 19 September 2025 11:34:35 +0000 (0:00:01.982) 0:02:40.872 ****** 2025-09-19 11:38:21.708891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.708898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.708905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.708912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.708918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 11:38:21.708925 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
11:38:21.708934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.708941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.708947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.708958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.708969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 11:38:21.708975 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.708981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.708987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.708994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 11:38:21.709000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 11:38:21.709006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 11:38:21.709013 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.709019 | orchestrator | 2025-09-19 11:38:21.709025 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-19 11:38:21.709031 | orchestrator | Friday 19 September 2025 11:34:36 +0000 (0:00:01.232) 0:02:42.105 ****** 2025-09-19 11:38:21.709038 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.709044 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.709050 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.709056 | 
orchestrator | 2025-09-19 11:38:21.709062 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-19 11:38:21.709068 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:01.340) 0:02:43.445 ****** 2025-09-19 11:38:21.709074 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.709080 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.709086 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.709092 | orchestrator | 2025-09-19 11:38:21.709099 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-19 11:38:21.709105 | orchestrator | Friday 19 September 2025 11:34:40 +0000 (0:00:02.314) 0:02:45.760 ****** 2025-09-19 11:38:21.709111 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.709117 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.709123 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.709129 | orchestrator | 2025-09-19 11:38:21.709135 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-19 11:38:21.709141 | orchestrator | Friday 19 September 2025 11:34:40 +0000 (0:00:00.304) 0:02:46.064 ****** 2025-09-19 11:38:21.709147 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.709153 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.709159 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.709165 | orchestrator | 2025-09-19 11:38:21.709171 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-19 11:38:21.709178 | orchestrator | Friday 19 September 2025 11:34:41 +0000 (0:00:00.542) 0:02:46.607 ****** 2025-09-19 11:38:21.709186 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.709197 | orchestrator | 2025-09-19 11:38:21.709203 | orchestrator | TASK [haproxy-config : Copying over keystone 
haproxy config] ******************* 2025-09-19 11:38:21.709209 | orchestrator | Friday 19 September 2025 11:34:42 +0000 (0:00:00.943) 0:02:47.551 ****** 2025-09-19 11:38:21.709219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:38:21.709227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:38:21.709234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:38:21.709241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:38:21.709248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:38:21.709263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:38:21.709274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:38:21.709281 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:38:21.709288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:38:21.709294 | orchestrator |
2025-09-19 11:38:21.709300 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-19 11:38:21.709306 | orchestrator | Friday 19 September 2025 11:34:45 +0000 (0:00:03.715) 0:02:51.267 ******
2025-09-19 11:38:21.709313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:38:21.709327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:38:21.709338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:38:21.709345 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.709352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:38:21.709358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:38:21.709365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:38:21.709371 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.709384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:38:21.709394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:38:21.709401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:38:21.709408 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.709414 | orchestrator |
2025-09-19 11:38:21.709438 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-19 11:38:21.709444 | orchestrator | Friday 19 September 2025 11:34:46 +0000 (0:00:01.022) 0:02:52.289 ******
2025-09-19 11:38:21.709451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709464 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.709470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709483 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.709489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:38:21.709506 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.709512 | orchestrator |
2025-09-19 11:38:21.709519 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-19 11:38:21.709525 | orchestrator | Friday 19 September 2025 11:34:47 +0000 (0:00:00.866) 0:02:53.156 ******
2025-09-19 11:38:21.709531 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.709537 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.709543 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.709549 | orchestrator |
2025-09-19 11:38:21.709555 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-19 11:38:21.709561 | orchestrator | Friday 19 September 2025 11:34:48 +0000 (0:00:01.323) 0:02:54.479 ******
2025-09-19 11:38:21.709567 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.709573 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.709579 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.709586 | orchestrator |
2025-09-19 11:38:21.709594 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-19 11:38:21.709601 | orchestrator | Friday 19 September 2025 11:34:51 +0000 (0:00:02.117) 0:02:56.597 ******
2025-09-19 11:38:21.709607 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.709613 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.709619 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.709625 | orchestrator |
2025-09-19 11:38:21.709632 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-19 11:38:21.709638 | orchestrator | Friday 19 September 2025 11:34:51 +0000 (0:00:00.536) 0:02:57.133 ******
2025-09-19 11:38:21.709644 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.709650 | orchestrator |
2025-09-19 11:38:21.709656 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-19 11:38:21.709662 | orchestrator | Friday 19 September 2025 11:34:52 +0000 (0:00:01.005) 0:02:58.139 ******
2025-09-19 11:38:21.709673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709723 | orchestrator |
2025-09-19 11:38:21.709729 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-19 11:38:21.709735 | orchestrator | Friday 19 September 2025 11:34:56 +0000 (0:00:03.997) 0:03:02.136 ******
2025-09-19 11:38:21.709742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709760 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.709769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709793 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.709799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:38:21.709809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.709816 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.709822 | orchestrator |
2025-09-19 11:38:21.709828 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-19 11:38:21.709834 | orchestrator | Friday 19 September 2025 11:34:57 +0000 (0:00:00.962) 0:03:03.098 ******
2025-09-19 11:38:21.709840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709853 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.709859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709872 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.709878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-19 11:38:21.709891 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.709897 | orchestrator |
2025-09-19 11:38:21.709905 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-19 11:38:21.709912 | orchestrator | Friday 19 September 2025 11:34:59 +0000 (0:00:01.481) 0:03:04.580 ******
2025-09-19 11:38:21.709918 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.709924 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.709930 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.709936 | orchestrator |
2025-09-19 11:38:21.709942 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-19 11:38:21.709948 | orchestrator | Friday 19 September 2025 11:35:00 +0000 (0:00:01.312) 0:03:05.893 ******
2025-09-19 11:38:21.709955 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.709961 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.709967 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.709973 | orchestrator |
2025-09-19 11:38:21.709979 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-19 11:38:21.709985 | orchestrator | Friday 19 September 2025 11:35:02 +0000 (0:00:02.196) 0:03:08.089 ******
2025-09-19 11:38:21.710008 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.710039 | orchestrator |
2025-09-19 11:38:21.710047 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-19 11:38:21.710054 | orchestrator | Friday 19 September 2025 11:35:03 +0000 (0:00:01.339) 0:03:09.429 ******
2025-09-19 11:38:21.710064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 11:38:21.710070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 11:38:21.710106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 11:38:21.710130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710166 | orchestrator |
2025-09-19 11:38:21.710172 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-09-19 11:38:21.710178 | orchestrator | Friday 19 September 2025 11:35:07 +0000 (0:00:03.879) 0:03:13.308 ******
2025-09-19 11:38:21.710185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-19 11:38:21.710191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.710210 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.710220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 11:38:21.710233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710253 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.710260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 11:38:21.710266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.710297 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.710303 | orchestrator | 2025-09-19 11:38:21.710309 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-19 11:38:21.710315 | orchestrator | Friday 19 September 2025 11:35:08 +0000 (0:00:00.596) 0:03:13.905 ****** 2025-09-19 11:38:21.710322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710358 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.710364 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710376 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.710383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:38:21.710395 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.710401 | orchestrator | 2025-09-19 11:38:21.710407 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-19 11:38:21.710413 | orchestrator | Friday 19 September 2025 11:35:09 +0000 (0:00:00.976) 0:03:14.881 ****** 2025-09-19 11:38:21.710455 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.710463 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.710469 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.710476 | orchestrator | 2025-09-19 11:38:21.710482 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-19 11:38:21.710488 | orchestrator | Friday 19 September 2025 11:35:10 +0000 (0:00:01.297) 0:03:16.178 ****** 2025-09-19 11:38:21.710494 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.710500 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.710506 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.710512 | orchestrator | 2025-09-19 11:38:21.710519 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-19 11:38:21.710525 | orchestrator | Friday 19 September 2025 11:35:12 +0000 (0:00:02.124) 0:03:18.302 ****** 2025-09-19 11:38:21.710531 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.710537 | orchestrator | 2025-09-19 11:38:21.710543 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-19 11:38:21.710554 | orchestrator | Friday 19 September 2025 11:35:14 +0000 (0:00:01.269) 0:03:19.572 ****** 2025-09-19 11:38:21.710560 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:38:21.710566 | orchestrator | 2025-09-19 11:38:21.710572 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-19 11:38:21.710579 | orchestrator | Friday 19 September 2025 11:35:16 +0000 (0:00:02.819) 0:03:22.391 ****** 2025-09-19 11:38:21.710603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710618 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.710625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710647 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.710666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.710686 | orchestrator | 2025-09-19 11:38:21.710693 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-19 11:38:21.710703 | orchestrator | Friday 19 September 2025 11:35:19 +0000 (0:00:02.238) 0:03:24.630 ****** 2025-09-19 11:38:21.710713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710739 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.710746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710764 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.710780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:38:21.710788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 11:38:21.710794 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.710801 | orchestrator | 2025-09-19 11:38:21.710807 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-19 11:38:21.710813 | orchestrator | Friday 19 September 2025 11:35:21 +0000 (0:00:02.335) 0:03:26.965 ****** 2025-09-19 11:38:21.710819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710836 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.710846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710859 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.710879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 11:38:21.710892 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.710898 | orchestrator | 2025-09-19 11:38:21.710905 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-19 11:38:21.710911 | orchestrator | Friday 19 September 2025 11:35:24 +0000 (0:00:02.848) 0:03:29.814 ****** 2025-09-19 11:38:21.710921 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.710927 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.710933 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.710939 | orchestrator | 2025-09-19 11:38:21.710944 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-19 11:38:21.710950 | orchestrator | Friday 19 September 2025 11:35:26 +0000 (0:00:01.946) 0:03:31.760 ****** 2025-09-19 11:38:21.710955 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.710960 | orchestrator | skipping: [testbed-node-1] 
2025-09-19 11:38:21.710966 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.710971 | orchestrator |
2025-09-19 11:38:21.710977 | orchestrator | TASK [include_role : masakari] *************************************************
2025-09-19 11:38:21.710982 | orchestrator | Friday 19 September 2025 11:35:27 +0000 (0:00:01.405) 0:03:33.165 ******
2025-09-19 11:38:21.710987 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.710993 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.710998 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711003 | orchestrator |
2025-09-19 11:38:21.711009 | orchestrator | TASK [include_role : memcached] ************************************************
2025-09-19 11:38:21.711014 | orchestrator | Friday 19 September 2025 11:35:27 +0000 (0:00:00.318) 0:03:33.483 ******
2025-09-19 11:38:21.711019 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.711025 | orchestrator |
2025-09-19 11:38:21.711030 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-09-19 11:38:21.711035 | orchestrator | Friday 19 September 2025 11:35:29 +0000 (0:00:01.330) 0:03:34.813 ******
2025-09-19 11:38:21.711041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711072 | orchestrator |
2025-09-19 11:38:21.711088 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-09-19 11:38:21.711098 | orchestrator | Friday 19 September 2025 11:35:30 +0000 (0:00:01.588) 0:03:36.402 ******
2025-09-19 11:38:21.711103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711115 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.711120 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.711126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:38:21.711131 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711137 | orchestrator |
2025-09-19 11:38:21.711142 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-09-19 11:38:21.711148 | orchestrator | Friday 19 September 2025 11:35:31 +0000 (0:00:00.396) 0:03:36.798 ******
2025-09-19 11:38:21.711156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:38:21.711162 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.711168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:38:21.711173 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.711189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:38:21.711196 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711205 | orchestrator |
2025-09-19 11:38:21.711211 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-09-19 11:38:21.711216 | orchestrator | Friday 19 September 2025 11:35:32 +0000 (0:00:00.916) 0:03:37.715 ******
2025-09-19 11:38:21.711221 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.711227 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.711232 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711237 | orchestrator |
2025-09-19 11:38:21.711243 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-09-19 11:38:21.711248 | orchestrator | Friday 19 September 2025 11:35:32 +0000 (0:00:00.449) 0:03:38.164 ******
2025-09-19 11:38:21.711253 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.711259 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.711264 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711269 | orchestrator |
2025-09-19 11:38:21.711275 | orchestrator | TASK [include_role : mistral] **************************************************
2025-09-19 11:38:21.711280 | orchestrator | Friday 19 September 2025 11:35:33 +0000 (0:00:01.287) 0:03:39.452 ******
2025-09-19 11:38:21.711285 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.711291 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.711296 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.711301 | orchestrator |
2025-09-19 11:38:21.711307 | orchestrator | TASK [include_role : neutron] **************************************************
2025-09-19 11:38:21.711312 | orchestrator | Friday 19 September 2025 11:35:34 +0000 (0:00:00.342) 0:03:39.795 ******
2025-09-19 11:38:21.711317 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.711322 | orchestrator |
2025-09-19 11:38:21.711328 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-09-19 11:38:21.711333 | orchestrator | Friday 19 September 2025 11:35:35 +0000 (0:00:01.417) 0:03:41.212 ******
2025-09-19 11:38:21.711339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:38:21.711345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:38:21.711382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 11:38:21.711434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 11:38:21.711452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:38:21.711517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:38:21.711558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 11:38:21.711615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:38:21.711636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 11:38:21.711648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:38:21.711660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:38:21.711669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 11:38:21.711696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:38:21.711735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:38:21.711741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.711747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:38:21.711762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.711771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.711793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.711798 | orchestrator | 2025-09-19 11:38:21.711804 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-19 11:38:21.711809 | orchestrator | Friday 19 September 2025 11:35:39 +0000 (0:00:04.303) 0:03:45.516 ****** 2025-09-19 11:38:21.711815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:38:21.711824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:38:21.711881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.711896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:38:21.711924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.711941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711962 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.711967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.711993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:38:21.711999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712064 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:38:21.712071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.712086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712100 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.712106 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.712115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.712126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:38:21.712149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712200 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.712209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.712242 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.712248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.712258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:38:21.712278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:38:21.712303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:38:21.712316 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.712322 | orchestrator | 2025-09-19 11:38:21.712327 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-19 11:38:21.712332 | orchestrator | Friday 19 September 2025 11:35:41 +0000 (0:00:01.515) 0:03:47.032 ****** 2025-09-19 11:38:21.712338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-09-19 11:38:21.712343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 11:38:21.712349 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.712354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 11:38:21.712360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 11:38:21.712365 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.712371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 11:38:21.712376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 11:38:21.712381 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.712387 | orchestrator | 2025-09-19 11:38:21.712392 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-19 11:38:21.712397 | orchestrator | Friday 19 September 2025 11:35:43 +0000 (0:00:02.022) 0:03:49.054 ****** 2025-09-19 11:38:21.712403 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.712408 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.712414 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.712430 | orchestrator | 2025-09-19 11:38:21.712436 | orchestrator | 
TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-19 11:38:21.712441 | orchestrator | Friday 19 September 2025 11:35:44 +0000 (0:00:01.286) 0:03:50.341 ****** 2025-09-19 11:38:21.712447 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.712452 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.712457 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.712463 | orchestrator | 2025-09-19 11:38:21.712490 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-19 11:38:21.712499 | orchestrator | Friday 19 September 2025 11:35:46 +0000 (0:00:02.078) 0:03:52.419 ****** 2025-09-19 11:38:21.712505 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.712510 | orchestrator | 2025-09-19 11:38:21.712515 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-19 11:38:21.712521 | orchestrator | Friday 19 September 2025 11:35:48 +0000 (0:00:01.204) 0:03:53.624 ****** 2025-09-19 11:38:21.712538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712559 | orchestrator | 2025-09-19 11:38:21.712564 | orchestrator | TASK 
[haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-19 11:38:21.712570 | orchestrator | Friday 19 September 2025 11:35:51 +0000 (0:00:03.618) 0:03:57.242 ****** 2025-09-19 11:38:21.712575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712584 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.712600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712609 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.712615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712621 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.712626 | orchestrator | 2025-09-19 11:38:21.712631 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-19 11:38:21.712637 | orchestrator | Friday 19 September 2025 11:35:52 +0000 (0:00:00.528) 0:03:57.771 ****** 2025-09-19 11:38:21.712642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 11:38:21.712648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2025-09-19 11:38:21.712653 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.712659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 11:38:21.712664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 11:38:21.712670 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.712675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 11:38:21.712680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 11:38:21.712686 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.712691 | orchestrator | 2025-09-19 11:38:21.712696 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-19 11:38:21.712702 | orchestrator | Friday 19 September 2025 11:35:53 +0000 (0:00:00.750) 0:03:58.521 ****** 2025-09-19 11:38:21.712713 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.712718 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.712724 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.712729 | orchestrator | 2025-09-19 11:38:21.712734 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-19 11:38:21.712740 | orchestrator | Friday 19 September 2025 11:35:54 
+0000 (0:00:01.874) 0:04:00.396 ****** 2025-09-19 11:38:21.712745 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.712750 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.712755 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.712761 | orchestrator | 2025-09-19 11:38:21.712769 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-19 11:38:21.712775 | orchestrator | Friday 19 September 2025 11:35:56 +0000 (0:00:01.855) 0:04:02.252 ****** 2025-09-19 11:38:21.712780 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.712785 | orchestrator | 2025-09-19 11:38:21.712791 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-19 11:38:21.712796 | orchestrator | Friday 19 September 2025 11:35:58 +0000 (0:00:01.483) 0:04:03.735 ****** 2025-09-19 11:38:21.712813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-09-19 11:38:21.712865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.712871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712885 | orchestrator | 2025-09-19 11:38:21.712891 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-19 11:38:21.712896 | orchestrator | Friday 19 September 2025 11:36:02 +0000 (0:00:04.187) 0:04:07.923 ****** 2025-09-19 11:38:21.712914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712932 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.712938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712962 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 11:38:21.712978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.712984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.712999 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713004 | orchestrator | 2025-09-19 11:38:21.713010 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-19 11:38:21.713015 | orchestrator | Friday 19 September 2025 11:36:03 +0000 (0:00:01.312) 0:04:09.235 ****** 2025-09-19 11:38:21.713021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713043 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:38:21.713051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713089 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-19 11:38:21.713111 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713117 | orchestrator | 2025-09-19 11:38:21.713122 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-19 11:38:21.713127 | orchestrator | Friday 19 September 2025 11:36:04 +0000 (0:00:01.061) 0:04:10.296 ****** 2025-09-19 11:38:21.713136 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.713141 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.713147 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.713152 | orchestrator | 2025-09-19 11:38:21.713157 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-19 11:38:21.713163 | orchestrator | Friday 19 September 2025 11:36:06 +0000 (0:00:01.335) 0:04:11.632 ****** 2025-09-19 11:38:21.713168 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.713173 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.713178 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.713184 | orchestrator | 2025-09-19 11:38:21.713189 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-19 11:38:21.713194 | orchestrator | Friday 19 September 2025 11:36:08 +0000 (0:00:02.014) 0:04:13.647 ****** 2025-09-19 11:38:21.713199 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.713205 | orchestrator | 2025-09-19 11:38:21.713210 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-19 11:38:21.713215 | orchestrator | Friday 19 September 2025 11:36:09 +0000 (0:00:01.564) 0:04:15.211 ****** 2025-09-19 11:38:21.713221 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-19 11:38:21.713226 | orchestrator | 2025-09-19 11:38:21.713231 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-19 11:38:21.713236 | orchestrator | Friday 19 September 2025 11:36:10 +0000 (0:00:00.815) 0:04:16.027 ****** 2025-09-19 11:38:21.713242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:38:21.713248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:38:21.713256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:38:21.713262 | 
orchestrator | 2025-09-19 11:38:21.713267 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-19 11:38:21.713273 | orchestrator | Friday 19 September 2025 11:36:15 +0000 (0:00:04.562) 0:04:20.589 ****** 2025-09-19 11:38:21.713289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713298 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713309 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713321 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713326 | orchestrator | 2025-09-19 11:38:21.713331 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-19 11:38:21.713337 | orchestrator | Friday 19 September 2025 11:36:16 +0000 (0:00:01.098) 0:04:21.688 ****** 2025-09-19 11:38:21.713342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713353 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713376 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713381 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:38:21.713387 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713392 | orchestrator | 2025-09-19 11:38:21.713397 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:38:21.713405 | orchestrator | Friday 19 September 2025 11:36:17 +0000 (0:00:01.567) 0:04:23.255 ****** 2025-09-19 11:38:21.713411 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.713416 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.713432 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.713438 | orchestrator | 2025-09-19 11:38:21.713443 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 11:38:21.713449 | orchestrator | Friday 19 September 2025 11:36:20 +0000 (0:00:02.548) 0:04:25.804 ****** 2025-09-19 11:38:21.713457 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.713463 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.713468 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.713473 | orchestrator | 2025-09-19 11:38:21.713479 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-19 11:38:21.713484 | orchestrator | Friday 19 September 2025 11:36:23 +0000 (0:00:03.112) 0:04:28.917 ****** 2025-09-19 11:38:21.713500 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-19 11:38:21.713506 | orchestrator | 2025-09-19 11:38:21.713511 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-19 11:38:21.713516 | 
orchestrator | Friday 19 September 2025 11:36:24 +0000 (0:00:01.442) 0:04:30.360 ****** 2025-09-19 11:38:21.713522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713528 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713539 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713550 | orchestrator | skipping: [testbed-node-2] 
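In the output above, the nova-novncproxy haproxy config is rendered (`changed`) while nova-spicehtml5proxy is skipped on every node: the loop items carry a service-level `enabled` flag, `True` for the former and `False` for the latter. A minimal sketch of that partitioning (illustrative only — the helper below is not the role's implementation, and the trimmed dicts keep just the fields relevant here):

```python
# Minimal sketch, assuming only the service-level 'enabled' flag decides
# whether the haproxy config task runs. Not the role's actual code; the
# dicts are trimmed versions of the loop items printed in the log.

cell_proxies = {
    "nova-novncproxy": {"group": "nova-novncproxy", "enabled": True},
    "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
    "nova-serialproxy": {"group": "nova-serialproxy", "enabled": False},
}

def partition(services):
    """Split services into (rendered, skipped) name lists by 'enabled'."""
    rendered = [name for name, svc in services.items() if svc["enabled"]]
    skipped = [name for name, svc in services.items() if not svc["enabled"]]
    return rendered, skipped

rendered, skipped = partition(cell_proxies)
print(rendered)  # ['nova-novncproxy']
print(skipped)   # ['nova-spicehtml5proxy', 'nova-serialproxy']
```

This matches the log: one `changed` item per node for nova-novncproxy, and `skipping` for the spicehtml5 and serial proxies on all three nodes.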
2025-09-19 11:38:21.713555 | orchestrator | 2025-09-19 11:38:21.713561 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-19 11:38:21.713566 | orchestrator | Friday 19 September 2025 11:36:26 +0000 (0:00:01.343) 0:04:31.703 ****** 2025-09-19 11:38:21.713572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713577 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713593 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:38:21.713607 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713613 | orchestrator | 2025-09-19 11:38:21.713618 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-19 11:38:21.713623 | orchestrator | Friday 19 September 2025 11:36:27 +0000 (0:00:01.275) 0:04:32.979 ****** 2025-09-19 11:38:21.713629 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713634 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713639 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713645 | orchestrator | 2025-09-19 11:38:21.713659 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:38:21.713665 | orchestrator | Friday 19 September 2025 11:36:29 +0000 (0:00:01.792) 0:04:34.771 ****** 2025-09-19 11:38:21.713671 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.713676 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.713681 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.713687 | orchestrator | 2025-09-19 11:38:21.713692 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 11:38:21.713697 | orchestrator | Friday 19 September 2025 11:36:31 +0000 (0:00:02.468) 0:04:37.240 ****** 2025-09-19 11:38:21.713702 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.713708 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.713713 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.713718 | orchestrator | 2025-09-19 11:38:21.713724 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-19 11:38:21.713729 | orchestrator | Friday 19 September 2025 11:36:34 +0000 (0:00:03.040) 0:04:40.280 ****** 2025-09-19 
11:38:21.713734 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-19 11:38:21.713740 | orchestrator | 2025-09-19 11:38:21.713745 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-19 11:38:21.713750 | orchestrator | Friday 19 September 2025 11:36:35 +0000 (0:00:00.845) 0:04:41.126 ****** 2025-09-19 11:38:21.713756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713761 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713776 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713787 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713792 | orchestrator | 2025-09-19 11:38:21.713798 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-19 11:38:21.713803 | orchestrator | Friday 19 September 2025 11:36:36 +0000 (0:00:01.295) 0:04:42.422 ****** 2025-09-19 11:38:21.713811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713817 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713828 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
11:38:21.713843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:38:21.713849 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713855 | orchestrator | 2025-09-19 11:38:21.713860 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-19 11:38:21.713866 | orchestrator | Friday 19 September 2025 11:36:38 +0000 (0:00:01.351) 0:04:43.774 ****** 2025-09-19 11:38:21.713871 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.713876 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.713881 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.713887 | orchestrator | 2025-09-19 11:38:21.713892 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:38:21.713897 | orchestrator | Friday 19 September 2025 11:36:39 +0000 (0:00:01.535) 0:04:45.309 ****** 2025-09-19 11:38:21.713903 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.713908 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:38:21.713913 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.713919 | orchestrator | 2025-09-19 11:38:21.713924 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 11:38:21.713929 | orchestrator | Friday 19 September 2025 11:36:42 +0000 (0:00:02.559) 0:04:47.869 ****** 2025-09-19 11:38:21.713935 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:38:21.713940 | orchestrator 
| ok: [testbed-node-1] 2025-09-19 11:38:21.713949 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:38:21.713955 | orchestrator | 2025-09-19 11:38:21.713960 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-19 11:38:21.713965 | orchestrator | Friday 19 September 2025 11:36:45 +0000 (0:00:03.312) 0:04:51.182 ****** 2025-09-19 11:38:21.713971 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.713976 | orchestrator | 2025-09-19 11:38:21.713981 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-19 11:38:21.713987 | orchestrator | Friday 19 September 2025 11:36:47 +0000 (0:00:01.560) 0:04:52.743 ****** 2025-09-19 11:38:21.713992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.713998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:38:21.714006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.714080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:38:21.714085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:38:21.714126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:38:21.714132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714149 | orchestrator | 2025-09-19 11:38:21.714154 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 11:38:21.714160 | orchestrator | Friday 19 September 2025 11:36:50 +0000 (0:00:03.324) 0:04:56.068 ****** 2025-09-19 11:38:21.714174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.714180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:38:21.714190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714255 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.714281 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:38:21.714287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714308 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.714314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:38:21.714320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 
11:38:21.714328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:38:21.714354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:38:21.714360 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
11:38:21.714365 | orchestrator | 2025-09-19 11:38:21.714371 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 11:38:21.714376 | orchestrator | Friday 19 September 2025 11:36:51 +0000 (0:00:00.634) 0:04:56.702 ****** 2025-09-19 11:38:21.714382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714393 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714409 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.714415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:38:21.714460 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:38:21.714465 | orchestrator | 2025-09-19 11:38:21.714471 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-19 11:38:21.714476 | orchestrator | Friday 19 September 2025 11:36:52 +0000 (0:00:01.292) 0:04:57.994 ****** 2025-09-19 11:38:21.714482 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.714487 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.714492 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.714498 | orchestrator | 2025-09-19 11:38:21.714503 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-19 11:38:21.714508 | orchestrator | Friday 19 September 2025 11:36:53 +0000 (0:00:01.425) 0:04:59.420 ****** 2025-09-19 11:38:21.714514 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:38:21.714519 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:38:21.714525 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:38:21.714530 | orchestrator | 2025-09-19 11:38:21.714535 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-19 11:38:21.714544 | orchestrator | Friday 19 September 2025 11:36:56 +0000 (0:00:02.237) 0:05:01.658 ****** 2025-09-19 11:38:21.714553 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.714558 | orchestrator | 2025-09-19 11:38:21.714564 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-19 11:38:21.714569 | orchestrator | Friday 19 September 2025 11:36:57 +0000 (0:00:01.368) 0:05:03.026 ****** 2025-09-19 11:38:21.714587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:38:21.714593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:38:21.714599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:38:21.714605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:38:21.714634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:38:21.714641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:38:21.714647 | orchestrator | 2025-09-19 11:38:21.714652 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-19 11:38:21.714658 | orchestrator | Friday 19 September 2025 11:37:02 +0000 (0:00:05.458) 0:05:08.484 ****** 2025-09-19 11:38:21.714663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:38:21.714669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:38:21.714679 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:38:21.714705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:38:21.714711 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.714717 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:38:21.714723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:38:21.714732 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.714737 | orchestrator | 
2025-09-19 11:38:21.714743 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-19 11:38:21.714748 | orchestrator | Friday 19 September 2025 11:37:03 +0000 (0:00:00.712) 0:05:09.197 ****** 2025-09-19 11:38:21.714754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 11:38:21.714762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714774 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 11:38:21.714795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714807 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
11:38:21.714813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 11:38:21.714818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:38:21.714830 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.714835 | orchestrator | 2025-09-19 11:38:21.714840 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-19 11:38:21.714846 | orchestrator | Friday 19 September 2025 11:37:04 +0000 (0:00:00.947) 0:05:10.144 ****** 2025-09-19 11:38:21.714851 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714857 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.714862 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.714867 | orchestrator | 2025-09-19 11:38:21.714873 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-19 11:38:21.714878 | orchestrator | Friday 19 September 2025 11:37:05 +0000 (0:00:00.826) 0:05:10.971 ****** 2025-09-19 11:38:21.714883 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:38:21.714889 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:38:21.714894 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:38:21.714900 | orchestrator | 2025-09-19 11:38:21.714905 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-09-19 11:38:21.714914 | orchestrator | Friday 19 September 2025 11:37:06 +0000 (0:00:01.337) 0:05:12.309 ****** 2025-09-19 11:38:21.714919 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:38:21.714925 | orchestrator | 2025-09-19 11:38:21.714930 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-19 11:38:21.714935 | orchestrator | Friday 19 September 2025 11:37:08 +0000 (0:00:01.421) 0:05:13.730 ****** 2025-09-19 11:38:21.714941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:38:21.714962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:38:21.714968 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.714982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.714988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:38:21.714993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:38:21.715008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715021 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:38:21.715041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:38:21.715046 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:38:21.715075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 11:38:21.715081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:38:21.715107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 11:38:21.715116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:38:21.715130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 11:38:21.715135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715165 | orchestrator | 2025-09-19 11:38:21.715170 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-19 11:38:21.715175 | orchestrator | Friday 19 September 2025 11:37:12 +0000 (0:00:04.392) 0:05:18.123 ****** 2025-09-19 11:38:21.715180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 11:38:21.715188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:38:21.715193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:38:21.715203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:38:21.715213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 11:38:21.715219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 11:38:21.715227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-19 11:38:21.715233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:38:21.715242 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:38:21.715255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:38:21.715263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:38:21.715282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:38:21.715287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:38:21.715295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:38:21.715302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:38:21.715324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:38:21.715329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715334 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:38:21.715355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:38:21.715363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:38:21.715368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:38:21.715378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:38:21.715383 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715388 | orchestrator |
2025-09-19 11:38:21.715393 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-19 11:38:21.715397 | orchestrator | Friday 19 September 2025 11:37:13 +0000 (0:00:01.212) 0:05:19.335 ******
2025-09-19 11:38:21.715402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715441 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715472 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-19 11:38:21.715487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-19 11:38:21.715496 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715501 | orchestrator |
2025-09-19 11:38:21.715506 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-19 11:38:21.715511 | orchestrator | Friday 19 September 2025 11:37:14 +0000 (0:00:01.047) 0:05:20.382 ******
2025-09-19 11:38:21.715516 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715521 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715525 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715530 | orchestrator |
2025-09-19 11:38:21.715535 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-19 11:38:21.715539 | orchestrator | Friday 19 September 2025 11:37:15 +0000 (0:00:00.445) 0:05:20.828 ******
2025-09-19 11:38:21.715544 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715549 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715554 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715558 | orchestrator |
2025-09-19 11:38:21.715563 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-19 11:38:21.715568 | orchestrator | Friday 19 September 2025 11:37:16 +0000 (0:00:01.608) 0:05:22.436 ******
2025-09-19 11:38:21.715573 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.715578 | orchestrator |
2025-09-19 11:38:21.715582 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-19 11:38:21.715590 | orchestrator | Friday 19 September 2025 11:37:18 +0000 (0:00:01.775) 0:05:24.212 ******
2025-09-19 11:38:21.715598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715617 | orchestrator |
2025-09-19 11:38:21.715622 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-19 11:38:21.715627 | orchestrator | Friday 19 September 2025 11:37:21 +0000 (0:00:02.431) 0:05:26.644 ******
2025-09-19 11:38:21.715632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715640 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715653 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:38:21.715667 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715671 | orchestrator |
2025-09-19 11:38:21.715676 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-19 11:38:21.715681 | orchestrator | Friday 19 September 2025 11:37:21 +0000 (0:00:00.413) 0:05:27.057 ******
2025-09-19 11:38:21.715686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 11:38:21.715690 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 11:38:21.715700 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-19 11:38:21.715710 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715714 | orchestrator |
2025-09-19 11:38:21.715719 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-19 11:38:21.715724 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:01.071) 0:05:28.128 ******
2025-09-19 11:38:21.715729 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715733 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715738 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715743 | orchestrator |
2025-09-19 11:38:21.715748 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-19 11:38:21.715756 | orchestrator | Friday 19 September 2025 11:37:23 +0000 (0:00:00.485) 0:05:28.614 ******
2025-09-19 11:38:21.715761 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715766 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715770 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715775 | orchestrator |
2025-09-19 11:38:21.715780 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-19 11:38:21.715785 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:01.389) 0:05:30.003 ******
2025-09-19 11:38:21.715789 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:38:21.715794 | orchestrator |
2025-09-19 11:38:21.715799 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-19 11:38:21.715803 | orchestrator | Friday 19 September 2025 11:37:26 +0000 (0:00:01.794) 0:05:31.798 ******
2025-09-19 11:38:21.715811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715851 | orchestrator |
2025-09-19 11:38:21.715859 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-19 11:38:21.715864 | orchestrator | Friday 19 September 2025 11:37:32 +0000 (0:00:06.566) 0:05:38.365 ******
2025-09-19 11:38:21.715869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715882 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715900 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:38:21.715922 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.715927 | orchestrator |
2025-09-19 11:38:21.715932 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-09-19 11:38:21.715937 | orchestrator | Friday 19 September 2025 11:37:33 +0000 (0:00:00.679) 0:05:39.044 ******
2025-09-19 11:38:21.715942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715966 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.715971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.715988 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.715993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.716001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:38:21.716006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.716011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:38:21.716016 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716024 | orchestrator |
2025-09-19 11:38:21.716029 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-19 11:38:21.716033 | orchestrator | Friday 19 September 2025 11:37:35 +0000 (0:00:01.700) 0:05:40.744 ******
2025-09-19 11:38:21.716038 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.716043 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.716048 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.716053 | orchestrator |
2025-09-19 11:38:21.716058 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-19 11:38:21.716062 | orchestrator | Friday 19 September 2025 11:37:36 +0000 (0:00:01.397) 0:05:42.142 ******
2025-09-19 11:38:21.716067 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.716072 | orchestrator |
changed: [testbed-node-1]
2025-09-19 11:38:21.716077 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.716081 | orchestrator |
2025-09-19 11:38:21.716086 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-19 11:38:21.716091 | orchestrator | Friday 19 September 2025 11:37:38 +0000 (0:00:02.109) 0:05:44.251 ******
2025-09-19 11:38:21.716096 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716100 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716105 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716110 | orchestrator |
2025-09-19 11:38:21.716115 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-19 11:38:21.716120 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:00.358) 0:05:44.610 ******
2025-09-19 11:38:21.716124 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716129 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716134 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716139 | orchestrator |
2025-09-19 11:38:21.716143 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-19 11:38:21.716148 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:00.327) 0:05:44.938 ******
2025-09-19 11:38:21.716153 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716158 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716162 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716167 | orchestrator |
2025-09-19 11:38:21.716172 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-19 11:38:21.716177 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.592) 0:05:45.530 ******
2025-09-19 11:38:21.716181 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716186 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716191 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716196 | orchestrator |
2025-09-19 11:38:21.716201 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-19 11:38:21.716205 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.341) 0:05:45.871 ******
2025-09-19 11:38:21.716210 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716215 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716220 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716224 | orchestrator |
2025-09-19 11:38:21.716229 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-19 11:38:21.716234 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.338) 0:05:46.210 ******
2025-09-19 11:38:21.716239 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716243 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716248 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716253 | orchestrator |
2025-09-19 11:38:21.716258 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-19 11:38:21.716262 | orchestrator | Friday 19 September 2025 11:37:41 +0000 (0:00:00.817) 0:05:47.027 ******
2025-09-19 11:38:21.716267 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716272 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716277 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716281 | orchestrator |
2025-09-19 11:38:21.716286 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-19 11:38:21.716294 | orchestrator | Friday 19 September 2025 11:37:42 +0000 (0:00:00.345) 0:05:47.740 ******
2025-09-19 11:38:21.716299 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716303 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716308 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716313 | orchestrator |
2025-09-19 11:38:21.716320 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-19 11:38:21.716325 | orchestrator | Friday 19 September 2025 11:37:42 +0000 (0:00:00.345) 0:05:48.085 ******
2025-09-19 11:38:21.716330 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716334 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716339 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716344 | orchestrator |
2025-09-19 11:38:21.716348 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-19 11:38:21.716353 | orchestrator | Friday 19 September 2025 11:37:43 +0000 (0:00:00.883) 0:05:48.969 ******
2025-09-19 11:38:21.716358 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716363 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716367 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716372 | orchestrator |
2025-09-19 11:38:21.716377 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-19 11:38:21.716382 | orchestrator | Friday 19 September 2025 11:37:44 +0000 (0:00:01.178) 0:05:50.148 ******
2025-09-19 11:38:21.716386 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716391 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716399 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716403 | orchestrator |
2025-09-19 11:38:21.716408 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-19 11:38:21.716413 | orchestrator | Friday 19 September 2025 11:37:45 +0000 (0:00:00.919) 0:05:51.067 ******
2025-09-19 11:38:21.716430 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.716435 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.716440 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.716445 | orchestrator |
2025-09-19 11:38:21.716450 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-19 11:38:21.716455 | orchestrator | Friday 19 September 2025 11:37:55 +0000 (0:00:09.748) 0:06:00.815 ******
2025-09-19 11:38:21.716460 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716464 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716469 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716474 | orchestrator |
2025-09-19 11:38:21.716479 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-19 11:38:21.716484 | orchestrator | Friday 19 September 2025 11:37:56 +0000 (0:00:00.846) 0:06:01.662 ******
2025-09-19 11:38:21.716488 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.716493 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.716498 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.716503 | orchestrator |
2025-09-19 11:38:21.716507 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-19 11:38:21.716512 | orchestrator | Friday 19 September 2025 11:38:04 +0000 (0:00:08.673) 0:06:10.336 ******
2025-09-19 11:38:21.716517 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716522 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716527 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716532 | orchestrator |
2025-09-19 11:38:21.716536 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-19 11:38:21.716541 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:03.147) 0:06:13.483 ******
2025-09-19 11:38:21.716546 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:38:21.716551 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:38:21.716555 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:38:21.716560 | orchestrator |
2025-09-19 11:38:21.716565 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-19 11:38:21.716570 | orchestrator | Friday 19 September 2025 11:38:12 +0000 (0:00:04.328) 0:06:17.811 ******
2025-09-19 11:38:21.716578 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716583 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716588 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716592 | orchestrator |
2025-09-19 11:38:21.716597 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-19 11:38:21.716602 | orchestrator | Friday 19 September 2025 11:38:12 +0000 (0:00:00.362) 0:06:18.174 ******
2025-09-19 11:38:21.716607 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716611 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716616 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716621 | orchestrator |
2025-09-19 11:38:21.716626 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-19 11:38:21.716630 | orchestrator | Friday 19 September 2025 11:38:13 +0000 (0:00:00.351) 0:06:18.525 ******
2025-09-19 11:38:21.716635 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716640 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716645 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716649 | orchestrator |
2025-09-19 11:38:21.716654 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-19 11:38:21.716659 | orchestrator | Friday 19 September 2025 11:38:13 +0000 (0:00:00.653) 0:06:19.179 ******
2025-09-19 11:38:21.716664 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716668 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716673 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716678 | orchestrator |
2025-09-19 11:38:21.716683 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-19 11:38:21.716687 | orchestrator | Friday 19 September 2025 11:38:14 +0000 (0:00:00.358) 0:06:19.538 ******
2025-09-19 11:38:21.716693 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716697 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716702 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716707 | orchestrator |
2025-09-19 11:38:21.716712 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-19 11:38:21.716717 | orchestrator | Friday 19 September 2025 11:38:14 +0000 (0:00:00.365) 0:06:19.903 ******
2025-09-19 11:38:21.716721 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:38:21.716726 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:38:21.716731 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:38:21.716735 | orchestrator |
2025-09-19 11:38:21.716740 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-19 11:38:21.716745 | orchestrator | Friday 19 September 2025 11:38:14 +0000 (0:00:00.357) 0:06:20.261 ******
2025-09-19 11:38:21.716750 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716755 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716759 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716764 | orchestrator |
2025-09-19 11:38:21.716769 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-19 11:38:21.716776 | orchestrator | Friday 19 September 2025 11:38:19 +0000 (0:00:05.204) 0:06:25.466 ******
2025-09-19 11:38:21.716781 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:38:21.716786 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:38:21.716791 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:38:21.716795 | orchestrator |
2025-09-19 11:38:21.716800 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:38:21.716805 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:38:21.716810 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:38:21.716815 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:38:21.716820 | orchestrator |
2025-09-19 11:38:21.716828 | orchestrator |
2025-09-19 11:38:21.716835 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:38:21.716840 | orchestrator | Friday 19 September 2025 11:38:20 +0000 (0:00:00.845) 0:06:26.311 ******
2025-09-19 11:38:21.716845 | orchestrator | ===============================================================================
2025-09-19 11:38:21.716850 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.75s
2025-09-19 11:38:21.716855 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.67s
2025-09-19 11:38:21.716859 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.57s
2025-09-19 11:38:21.716864 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.37s
2025-09-19 11:38:21.716869 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.87s
2025-09-19 11:38:21.716874 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.52s
2025-09-19 11:38:21.716879 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.46s
2025-09-19 11:38:21.716883 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.20s
2025-09-19 11:38:21.716888 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.56s
2025-09-19 11:38:21.716893 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.48s
2025-09-19 11:38:21.716898 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.39s
2025-09-19 11:38:21.716902 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.33s
2025-09-19 11:38:21.716907 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.31s
2025-09-19 11:38:21.716912 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.30s
2025-09-19 11:38:21.716917 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.19s
2025-09-19 11:38:21.716922 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.15s
2025-09-19 11:38:21.716926 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.00s
2025-09-19 11:38:21.716931 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 3.92s
2025-09-19 11:38:21.716936 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.88s
2025-09-19 11:38:21.716941 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.79s
2025-09-19 11:38:24.732231 | orchestrator | 2025-09-19 11:38:24 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:38:24.732858 | orchestrator | 2025-09-19 11:38:24 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:38:24.737471 | orchestrator | 2025-09-19 11:38:24 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:38:24.737533 | orchestrator | 2025-09-19 11:38:24 | INFO  | Wait 1 second(s) until the next check
11:39:46.966390 | orchestrator | 2025-09-19 11:39:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:50.007065 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:39:50.007234 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:39:50.008772 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:39:50.008797 | orchestrator | 2025-09-19 11:39:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:53.046503 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:39:53.047952 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:39:53.050068 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:39:53.050097 | orchestrator | 2025-09-19 11:39:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:56.095683 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:39:56.097046 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:39:56.098426 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:39:56.098459 | orchestrator | 2025-09-19 11:39:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:59.142853 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:39:59.144981 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:39:59.146945 | orchestrator | 2025-09-19 11:39:59 | 
INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:39:59.146998 | orchestrator | 2025-09-19 11:39:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:02.189997 | orchestrator | 2025-09-19 11:40:02 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:02.191843 | orchestrator | 2025-09-19 11:40:02 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:02.194182 | orchestrator | 2025-09-19 11:40:02 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:02.194253 | orchestrator | 2025-09-19 11:40:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:05.234241 | orchestrator | 2025-09-19 11:40:05 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:05.235481 | orchestrator | 2025-09-19 11:40:05 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:05.237177 | orchestrator | 2025-09-19 11:40:05 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:05.237212 | orchestrator | 2025-09-19 11:40:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:08.282650 | orchestrator | 2025-09-19 11:40:08 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:08.284220 | orchestrator | 2025-09-19 11:40:08 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:08.286381 | orchestrator | 2025-09-19 11:40:08 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:08.286489 | orchestrator | 2025-09-19 11:40:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:11.336577 | orchestrator | 2025-09-19 11:40:11 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:11.338230 | orchestrator | 2025-09-19 11:40:11 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in 
state STARTED 2025-09-19 11:40:11.340009 | orchestrator | 2025-09-19 11:40:11 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:11.340162 | orchestrator | 2025-09-19 11:40:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:14.384261 | orchestrator | 2025-09-19 11:40:14 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:14.385288 | orchestrator | 2025-09-19 11:40:14 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:14.386295 | orchestrator | 2025-09-19 11:40:14 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:14.386380 | orchestrator | 2025-09-19 11:40:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:17.418216 | orchestrator | 2025-09-19 11:40:17 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:17.419483 | orchestrator | 2025-09-19 11:40:17 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:17.420551 | orchestrator | 2025-09-19 11:40:17 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:17.420616 | orchestrator | 2025-09-19 11:40:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:20.471519 | orchestrator | 2025-09-19 11:40:20 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:20.472539 | orchestrator | 2025-09-19 11:40:20 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED 2025-09-19 11:40:20.474381 | orchestrator | 2025-09-19 11:40:20 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:20.475977 | orchestrator | 2025-09-19 11:40:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:23.522414 | orchestrator | 2025-09-19 11:40:23 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:23.523926 | orchestrator 
| 2025-09-19 11:40:23 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state STARTED
2025-09-19 11:40:23.526219 | orchestrator | 2025-09-19 11:40:23 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:40:23.526295 | orchestrator | 2025-09-19 11:40:23 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:26.574084 | orchestrator | 2025-09-19 11:40:26 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:40:26.580629 | orchestrator | 2025-09-19 11:40:26 | INFO  | Task d76c182b-9a70-4d6f-8aec-3a3b14bd656f is in state SUCCESS
2025-09-19 11:40:26.581979 | orchestrator |
2025-09-19 11:40:26.582056 | orchestrator |
2025-09-19 11:40:26.582102 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-19 11:40:26.582115 | orchestrator |
2025-09-19 11:40:26.582126 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-19 11:40:26.582137 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:00.721) 0:00:00.721 ******
2025-09-19 11:40:26.582149 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.582161 | orchestrator |
2025-09-19 11:40:26.582172 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-19 11:40:26.582256 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:00.878) 0:00:01.599 ******
2025-09-19 11:40:26.582274 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.582293 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.582478 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.582496 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.582508 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.582518 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.582529 | orchestrator |
2025-09-19 11:40:26.582540 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 11:40:26.582551 | orchestrator | Friday 19 September 2025 11:29:41 +0000 (0:00:01.626) 0:00:03.226 ******
2025-09-19 11:40:26.582562 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.582573 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.582584 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.582598 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.582610 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.582622 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.582634 | orchestrator |
2025-09-19 11:40:26.582646 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 11:40:26.582658 | orchestrator | Friday 19 September 2025 11:29:42 +0000 (0:00:01.127) 0:00:04.353 ******
2025-09-19 11:40:26.582670 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.582683 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.582730 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.582743 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.582755 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.582767 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.582824 | orchestrator |
2025-09-19 11:40:26.582837 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 11:40:26.582850 | orchestrator | Friday 19 September 2025 11:29:43 +0000 (0:00:00.807) 0:00:05.161 ******
2025-09-19 11:40:26.582862 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.582873 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.582885 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.582897 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.582930 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.582943 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.582955 | orchestrator |
2025-09-19 11:40:26.582967 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 11:40:26.582978 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:00.674) 0:00:05.835 ******
2025-09-19 11:40:26.582988 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.582999 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.583009 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.583020 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.583096 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.583109 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.583119 | orchestrator |
2025-09-19 11:40:26.583130 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 11:40:26.583141 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:00.564) 0:00:06.400 ******
2025-09-19 11:40:26.583151 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.583162 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.583172 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.583183 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.583193 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.583204 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.583274 | orchestrator |
2025-09-19 11:40:26.583318 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 11:40:26.583340 | orchestrator | Friday 19 September 2025 11:29:45 +0000 (0:00:01.021) 0:00:07.421 ******
2025-09-19 11:40:26.583360 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.583379 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.583455 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.583467 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.583477 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.583488 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.583499 | orchestrator |
2025-09-19 11:40:26.583510 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 11:40:26.583520 | orchestrator | Friday 19 September 2025 11:29:46 +0000 (0:00:00.881) 0:00:08.302 ******
2025-09-19 11:40:26.583531 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.583542 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.583553 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.583563 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.583574 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.583584 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.583618 | orchestrator |
2025-09-19 11:40:26.583630 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 11:40:26.583641 | orchestrator | Friday 19 September 2025 11:29:47 +0000 (0:00:01.032) 0:00:09.335 ******
2025-09-19 11:40:26.583664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 11:40:26.583675 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:40:26.583686 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:40:26.583697 | orchestrator |
2025-09-19 11:40:26.583707 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 11:40:26.583718 | orchestrator | Friday 19 September 2025 11:29:48 +0000 (0:00:00.962) 0:00:10.297 ******
2025-09-19 11:40:26.583761 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.583773 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.583784 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.583794 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.583805 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.583815 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.583826 | orchestrator |
2025-09-19 11:40:26.583913 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 11:40:26.583926 | orchestrator | Friday 19 September 2025 11:29:49 +0000 (0:00:00.896) 0:00:11.194 ******
2025-09-19 11:40:26.583948 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 11:40:26.583959 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:40:26.583970 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:40:26.583981 | orchestrator |
2025-09-19 11:40:26.583991 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 11:40:26.584002 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:03.260) 0:00:14.454 ******
2025-09-19 11:40:26.584104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 11:40:26.584116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 11:40:26.584127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 11:40:26.584138 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.584149 | orchestrator |
2025-09-19 11:40:26.584160 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-19 11:40:26.584171 | orchestrator | Friday 19 September 2025 11:29:53 +0000 (0:00:00.736) 0:00:15.190 ******
2025-09-19 11:40:26.584184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584241 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.584254 | orchestrator |
2025-09-19 11:40:26.584271 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-19 11:40:26.584289 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:01.597) 0:00:16.788 ******
2025-09-19 11:40:26.584331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584390 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.584410 | orchestrator |
2025-09-19 11:40:26.584429 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-19 11:40:26.584447 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.269) 0:00:17.057 ******
2025-09-19 11:40:26.584470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 11:29:50.187276', 'end': '2025-09-19 11:29:50.519802', 'delta': '0:00:00.332526', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 11:29:51.372989', 'end': '2025-09-19 11:29:51.648998', 'delta': '0:00:00.276009', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 11:29:52.117588', 'end': '2025-09-19 11:29:52.426250', 'delta': '0:00:00.308662', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.584555 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.584565 | orchestrator |
2025-09-19 11:40:26.584576 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-19 11:40:26.584587 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.381) 0:00:17.439 ******
2025-09-19 11:40:26.584598 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.584609 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.584620 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.584631 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.584641 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.584652 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.584663 | orchestrator |
2025-09-19 11:40:26.584673 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 11:40:26.584684 | orchestrator | Friday 19 September 2025 11:29:57 +0000 (0:00:01.873) 0:00:19.312 ******
2025-09-19 11:40:26.584695 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:40:26.584706 | orchestrator |
2025-09-19 11:40:26.584916 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 11:40:26.584940 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.759) 0:00:20.072 ******
2025-09-19 11:40:26.584951 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.584962 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.584972 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.584983 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.584994 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585004 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.585015 | orchestrator |
2025-09-19 11:40:26.585026 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 11:40:26.585037 | orchestrator | Friday 19 September 2025 11:29:59 +0000 (0:00:01.119) 0:00:21.191 ******
2025-09-19 11:40:26.585056 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585104 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.585115 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.585126 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585136 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.585147 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.585157 | orchestrator |
2025-09-19 11:40:26.585168 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:40:26.585179 | orchestrator | Friday 19 September 2025 11:30:00 +0000 (0:00:01.205) 0:00:22.397 ******
2025-09-19 11:40:26.585190 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585200 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.585210 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.585221 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.585231 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585242 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.585255 | orchestrator |
2025-09-19 11:40:26.585274 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 11:40:26.585292 | orchestrator | Friday 19 September 2025 11:30:01 +0000 (0:00:01.087) 0:00:23.485 ******
2025-09-19 11:40:26.585333 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585353 | orchestrator |
2025-09-19 11:40:26.585380 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 11:40:26.585399 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.316) 0:00:23.801 ******
2025-09-19 11:40:26.585411 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585421 | orchestrator |
2025-09-19 11:40:26.585432 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:40:26.585443 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.304) 0:00:24.106 ******
2025-09-19 11:40:26.585453 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585464 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.585475 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.585570 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.585584 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585595 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.585606 | orchestrator |
2025-09-19 11:40:26.585627 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 11:40:26.585638 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:00.660) 0:00:24.766 ******
2025-09-19 11:40:26.585649 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585660 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.585774 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.585794 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.585813 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585831 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.585899 | orchestrator |
2025-09-19 11:40:26.585916 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 11:40:26.585926 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:00.779) 0:00:25.545 ******
2025-09-19 11:40:26.585937 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.585948 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.585959 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.585969 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.585980 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.585990 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.586001 | orchestrator |
2025-09-19 11:40:26.586012 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 11:40:26.586102 | orchestrator | Friday 19 September 2025 11:30:04 +0000 (0:00:00.654) 0:00:26.200 ******
2025-09-19 11:40:26.586114 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.586124 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.586135 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.586154 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.586165 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.586176 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.586187 | orchestrator |
2025-09-19 11:40:26.586198 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 11:40:26.586208 | orchestrator | Friday 19 September 2025 11:30:05 +0000 (0:00:00.771) 0:00:26.971 ******
2025-09-19 11:40:26.586219 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.586230 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.586241 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.586251 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.586262 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.586370 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.586382 | orchestrator |
2025-09-19 11:40:26.586393 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 11:40:26.586404 | orchestrator | Friday 19 September 2025 11:30:06 +0000 (0:00:00.824) 0:00:27.796 ******
2025-09-19 11:40:26.586415 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.586426 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.586437 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.586448 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.586458 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.586467 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.586477 | orchestrator |
2025-09-19 11:40:26.586487 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 11:40:26.586497 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:00.939) 0:00:28.735 ******
2025-09-19 11:40:26.586506 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.586516 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.586526 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.586535 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.586545 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.586554 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.586564 | orchestrator |
2025-09-19 11:40:26.586574 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 11:40:26.586583 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:00.515) 0:00:29.251 ******
2025-09-19 11:40:26.586595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d', 'dm-uuid-LVM-1X5jOw5YrOpdBZp1inS61cY4IZgr0qkbS00YRoEWqLvmSH5VCp59bD9C5gLTzCR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0', 'dm-uuid-LVM-bToMsaMj4RbkRV92dGYGektzmUyq84td1UhSOMqph4YGMZkUxddkOkY7ZYKExd3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.586693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [,
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.586758 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YU1Dvu-xG3I-AwmX-XQC5-6YUC-aBPC-2Y3aoD', 'scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f', 'scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.586769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUBdC2-LpG7-omzw-GYkc-VKfE-4FdU-CFyZep', 'scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238', 'scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.586780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7', 'dm-uuid-LVM-UX7zUPNGiW0Fz1MJHY71fwZ6QYfyKwS9XvDKSKF0EM6OSh31mH04XGsl3daKj1BL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2', 'dm-uuid-LVM-K0PDPI4eASPQXfjB6Qa1kDA6gSTSFdCfwq1XGiLdA2E0nTHZl08q1XXALebICKB1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6', 'scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.586840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.586850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-19 11:40:26.586860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39', 'dm-uuid-LVM-dTXFflCdQ7PBCUHBj3A63R0WdXnAsDdED3r94jEdLUDrw7CrZG4kzyYjPZEyfmxk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1', 'dm-uuid-LVM-YIFZjCsRr7JIF9aCqwtdyN5XmPO2pj6JRCAnTvD3ltEse3AM0y6TFaBey5rpAVXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.586987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587006 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qtr002-FGlN-pk9H-NbNC-e6y9-NFqg-3tsncr', 'scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd', 'scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587037 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.587047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0B1f4w-AsFN-VTXc-1xv7-VN32-2REQ-2o6M9o', 'scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7', 'scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d', 'scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-09-19-10-45-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:40:26.587144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587166 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HE7Sp2-tIYZ-dcwg-7eMf-hWHx-qJLn-ck38ib', 'scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59', 'scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ag9xcB-1iLg-l4WH-1JOO-W30A-gWpl-0b8RtB', 'scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed', 'scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:40:26.587188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36', 'scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part1', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part14', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part15', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part16', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587415 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.587433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587483 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.587500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587556 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.587577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part1', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part14', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part15', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part16', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587605 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.587615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:40:26.587711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:40:26.587743 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.587753 | orchestrator |
2025-09-19 11:40:26.587762 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-19 11:40:26.587772 | orchestrator | Friday 19 September 2025  11:30:08 +0000 (0:00:00.974)       0:00:30.225 ******
2025-09-19 11:40:26.587783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d', 'dm-uuid-LVM-1X5jOw5YrOpdBZp1inS61cY4IZgr0qkbS00YRoEWqLvmSH5VCp59bD9C5gLTzCR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0', 'dm-uuid-LVM-bToMsaMj4RbkRV92dGYGektzmUyq84td1UhSOMqph4YGMZkUxddkOkY7ZYKExd3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587927 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587944 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7', 'dm-uuid-LVM-UX7zUPNGiW0Fz1MJHY71fwZ6QYfyKwS9XvDKSKF0EM6OSh31mH04XGsl3daKj1BL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.587999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588029 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2', 'dm-uuid-LVM-K0PDPI4eASPQXfjB6Qa1kDA6gSTSFdCfwq1XGiLdA2E0nTHZl08q1XXALebICKB1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YU1Dvu-xG3I-AwmX-XQC5-6YUC-aBPC-2Y3aoD', 'scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f', 'scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588097 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUBdC2-LpG7-omzw-GYkc-VKfE-4FdU-CFyZep', 'scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238', 'scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588108 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6', 'scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:40:26.588201 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason':
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588570 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 11:40:26.588596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qtr002-FGlN-pk9H-NbNC-e6y9-NFqg-3tsncr', 'scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd', 'scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588618 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0B1f4w-AsFN-VTXc-1xv7-VN32-2REQ-2o6M9o', 'scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7', 'scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d', 'scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588661 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.588671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39', 'dm-uuid-LVM-dTXFflCdQ7PBCUHBj3A63R0WdXnAsDdED3r94jEdLUDrw7CrZG4kzyYjPZEyfmxk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1', 'dm-uuid-LVM-YIFZjCsRr7JIF9aCqwtdyN5XmPO2pj6JRCAnTvD3ltEse3AM0y6TFaBey5rpAVXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588698 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588708 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588748 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.588759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588784 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588795 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588846 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588856 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588867 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HE7Sp2-tIYZ-dcwg-7eMf-hWHx-qJLn-ck38ib', 'scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59', 'scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588921 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ag9xcB-1iLg-l4WH-1JOO-W30A-gWpl-0b8RtB', 'scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed', 'scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.588985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part1', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part14', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part15', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part16', 'scsi-SQEMU_QEMU_HARDDISK_1681d7ca-7745-4fd2-bcb7-23c40da03ace-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 11:40:26.589003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36', 'scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589014 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589049 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.589060 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589070 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589080 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589090 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589100 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589114 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589130 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589145 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589156 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part1', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part14', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part15', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part16', 'scsi-SQEMU_QEMU_HARDDISK_09613f79-b5a9-459c-8665-9206125e2c07-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589167 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.589181 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589196 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.589212 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589223 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589233 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589243 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589253 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589267 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589287 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589327 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589338 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb314818-0d8c-4ce7-852a-bbdb7b6af0f7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589355 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:40:26.589371 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.589381 | orchestrator | 2025-09-19 11:40:26.589391 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 11:40:26.589402 | orchestrator | Friday 19 September 2025 11:30:11 +0000 (0:00:02.822) 0:00:33.048 ****** 2025-09-19 11:40:26.589417 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.589427 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.589437 | orchestrator | ok: [testbed-node-5] 2025-09-19 
11:40:26.589446 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.589456 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.589465 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.589475 | orchestrator | 2025-09-19 11:40:26.589485 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-19 11:40:26.589494 | orchestrator | Friday 19 September 2025 11:30:13 +0000 (0:00:02.009) 0:00:35.058 ****** 2025-09-19 11:40:26.589504 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.589513 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.589523 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.589532 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.589542 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.589551 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.589561 | orchestrator | 2025-09-19 11:40:26.589571 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:40:26.589581 | orchestrator | Friday 19 September 2025 11:30:13 +0000 (0:00:00.555) 0:00:35.613 ****** 2025-09-19 11:40:26.589591 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.589600 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.589610 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.589620 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.589629 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.589639 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.589648 | orchestrator | 2025-09-19 11:40:26.589658 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:40:26.589668 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:00.812) 0:00:36.425 ****** 2025-09-19 11:40:26.589677 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.589687 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 11:40:26.589696 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.589706 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.589715 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.589725 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.589735 | orchestrator | 2025-09-19 11:40:26.589744 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:40:26.589754 | orchestrator | Friday 19 September 2025 11:30:16 +0000 (0:00:01.354) 0:00:37.779 ****** 2025-09-19 11:40:26.589764 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.589773 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.589783 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.589793 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.589802 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.589812 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.589821 | orchestrator | 2025-09-19 11:40:26.589831 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:40:26.589841 | orchestrator | Friday 19 September 2025 11:30:17 +0000 (0:00:01.283) 0:00:39.063 ****** 2025-09-19 11:40:26.589850 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.589860 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.589870 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.589888 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.589897 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.589907 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.589916 | orchestrator | 2025-09-19 11:40:26.589926 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 11:40:26.589936 | orchestrator | Friday 19 September 2025 11:30:18 +0000 (0:00:00.663) 0:00:39.726 ****** 
2025-09-19 11:40:26.589945 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 11:40:26.589955 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 11:40:26.589964 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 11:40:26.589974 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:40:26.589984 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 11:40:26.589993 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 11:40:26.590003 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-19 11:40:26.590013 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 11:40:26.590061 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 11:40:26.590071 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 11:40:26.590080 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 11:40:26.590090 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 11:40:26.590099 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-19 11:40:26.590108 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 11:40:26.590118 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-19 11:40:26.590127 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-19 11:40:26.590137 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-19 11:40:26.590146 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-19 11:40:26.590156 | orchestrator | 2025-09-19 11:40:26.590165 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 11:40:26.590175 | orchestrator | Friday 19 September 2025 11:30:21 +0000 (0:00:03.606) 0:00:43.333 ****** 2025-09-19 11:40:26.590189 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-09-19 11:40:26.590199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:40:26.590208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:40:26.590218 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590227 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 11:40:26.590237 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 11:40:26.590246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 11:40:26.590255 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.590265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 11:40:26.590274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 11:40:26.590296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 11:40:26.590322 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.590332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 11:40:26.590342 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 11:40:26.590351 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 11:40:26.590361 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.590370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 11:40:26.590380 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-19 11:40:26.590389 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-19 11:40:26.590399 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.590408 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-19 11:40:26.590424 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-19 11:40:26.590434 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2025-09-19 11:40:26.590443 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.590453 | orchestrator | 2025-09-19 11:40:26.590462 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 11:40:26.590472 | orchestrator | Friday 19 September 2025 11:30:22 +0000 (0:00:01.145) 0:00:44.479 ****** 2025-09-19 11:40:26.590481 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.590491 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.590500 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.590510 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.590520 | orchestrator | 2025-09-19 11:40:26.590529 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:40:26.590539 | orchestrator | Friday 19 September 2025 11:30:23 +0000 (0:00:01.126) 0:00:45.606 ****** 2025-09-19 11:40:26.590549 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590558 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.590568 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.590577 | orchestrator | 2025-09-19 11:40:26.590587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 11:40:26.590596 | orchestrator | Friday 19 September 2025 11:30:24 +0000 (0:00:00.435) 0:00:46.041 ****** 2025-09-19 11:40:26.590606 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590615 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.590625 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.590635 | orchestrator | 2025-09-19 11:40:26.590644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-09-19 11:40:26.590654 | orchestrator | Friday 19 September 2025 11:30:25 +0000 (0:00:00.591) 0:00:46.632 ****** 2025-09-19 11:40:26.590663 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590672 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.590682 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.590691 | orchestrator | 2025-09-19 11:40:26.590701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:40:26.590710 | orchestrator | Friday 19 September 2025 11:30:25 +0000 (0:00:00.789) 0:00:47.421 ****** 2025-09-19 11:40:26.590720 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.590729 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.590739 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.590749 | orchestrator | 2025-09-19 11:40:26.590758 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:40:26.590768 | orchestrator | Friday 19 September 2025 11:30:26 +0000 (0:00:00.736) 0:00:48.157 ****** 2025-09-19 11:40:26.590777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.590787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.590796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.590806 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590815 | orchestrator | 2025-09-19 11:40:26.590825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:40:26.590834 | orchestrator | Friday 19 September 2025 11:30:27 +0000 (0:00:00.555) 0:00:48.713 ****** 2025-09-19 11:40:26.590844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.590854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.590863 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.590873 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590882 | orchestrator | 2025-09-19 11:40:26.590892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:40:26.590901 | orchestrator | Friday 19 September 2025 11:30:27 +0000 (0:00:00.434) 0:00:49.148 ****** 2025-09-19 11:40:26.590916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.590925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.590935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.590945 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.590954 | orchestrator | 2025-09-19 11:40:26.590968 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:40:26.590978 | orchestrator | Friday 19 September 2025 11:30:27 +0000 (0:00:00.394) 0:00:49.543 ****** 2025-09-19 11:40:26.590988 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.590997 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.591007 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.591016 | orchestrator | 2025-09-19 11:40:26.591026 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 11:40:26.591035 | orchestrator | Friday 19 September 2025 11:30:28 +0000 (0:00:00.555) 0:00:50.098 ****** 2025-09-19 11:40:26.591045 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:40:26.591054 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:40:26.591064 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:40:26.591073 | orchestrator | 2025-09-19 11:40:26.591088 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 11:40:26.591098 | orchestrator | Friday 19 September 2025 
11:30:29 +0000 (0:00:00.737) 0:00:50.835 ****** 2025-09-19 11:40:26.591108 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:40:26.591117 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:40:26.591127 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:40:26.591137 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:40:26.591147 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:40:26.591156 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:40:26.591166 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:40:26.591175 | orchestrator | 2025-09-19 11:40:26.591185 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 11:40:26.591195 | orchestrator | Friday 19 September 2025 11:30:29 +0000 (0:00:00.705) 0:00:51.540 ****** 2025-09-19 11:40:26.591204 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:40:26.591214 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:40:26.591223 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:40:26.591233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:40:26.591242 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:40:26.591252 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:40:26.591262 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2025-09-19 11:40:26.591271 | orchestrator |
TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 11:30:31 +0000 (0:00:01.960) 0:00:53.501 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 11:30:33 +0000 (0:00:01.417) 0:00:54.919 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 11:30:34 +0000 (0:00:01.242) 0:00:56.161 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 11:30:35 +0000 (0:00:01.175) 0:00:57.337 ******
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 11:30:36 +0000 (0:00:01.013) 0:00:58.350 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
ok: [testbed-node-3]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 11:30:37 +0000 (0:00:00.878) 0:00:59.229 ******
skipping: [testbed-node-0]
ok: [testbed-node-3]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 11:30:38 +0000 (0:00:00.643) 0:00:59.872 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 11:30:39 +0000 (0:00:01.160) 0:01:01.032 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 11:30:40 +0000 (0:00:01.020) 0:01:02.053 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 11:30:40 +0000 (0:00:00.552) 0:01:02.606 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 11:30:43 +0000 (0:00:02.208) 0:01:04.815 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 11:30:44 +0000 (0:00:01.093) 0:01:05.908 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
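A minimal sketch of what the "Check for a ... container" tasks above boil down to: ask the container runtime which ceph-* containers are running and test for the daemon name. The `ceph-<daemon>-<host>` name pattern and the use of `podman ps --format '{{.Names}}'` output as input are assumptions, not taken from this log.

```shell
# Test whether a ceph daemon container is running, given the runtime's
# list of running container names (e.g. podman ps --format '{{.Names}}').
has_daemon() {
  # $1 = newline-separated running-container names, $2 = daemon type
  printf '%s\n' "$1" | grep -q "^ceph-$2-"
}

# Simulated runtime output for a control-plane node like testbed-node-0
# (names are illustrative, following the assumed pattern above):
names="ceph-mon-testbed-node-0
ceph-mgr-testbed-node-0
ceph-crash-testbed-node-0"

has_daemon "$names" mon && echo "mon: running"
has_daemon "$names" osd || echo "osd: not running"
```

This matches the ok/skipping split in the log: control-plane nodes (0-2) report mon/mgr containers, storage nodes (3-5) report osd/mds/rgw.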
TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 11:30:44 +0000 (0:00:00.655) 0:01:06.563 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 11:30:45 +0000 (0:00:00.671) 0:01:07.235 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 11:30:46 +0000 (0:00:00.890) 0:01:08.126 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 11:30:47 +0000 (0:00:00.543) 0:01:08.669 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 11:30:47 +0000 (0:00:00.642) 0:01:09.311 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 11:30:48 +0000 (0:00:00.492) 0:01:09.804 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 11:30:48 +0000 (0:00:00.693) 0:01:10.497 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 11:30:49 +0000 (0:00:00.605) 0:01:11.103 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 11:30:50 +0000 (0:00:00.794) 0:01:11.898 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Friday 19 September 2025 11:30:51 +0000 (0:00:01.538) 0:01:13.436 ******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Friday 19 September 2025 11:30:53 +0000 (0:00:01.837) 0:01:15.274 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Friday 19 September 2025 11:30:55 +0000 (0:00:02.252) 0:01:17.526 ******
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Friday 19 September 2025 11:30:57 +0000 (0:00:01.138) 0:01:18.664 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Friday 19 September 2025 11:30:57 +0000 (0:00:00.620) 0:01:19.285 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Friday 19 September 2025 11:30:58 +0000 (0:00:00.799) 0:01:20.084 ******
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
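The "Generate systemd ceph target file" / "Enable ceph.target" steps above install a target unit that groups all ceph container services so they can be started and stopped together. A sketch of the equivalent manual steps; the unit body is an assumption modelled on ceph-ansible's template, and /tmp stands in for /etc/systemd/system so the sketch runs unprivileged.

```shell
# Write a minimal ceph.target unit (content is an assumption, not from the log).
cat > /tmp/ceph.target <<'EOF'
[Unit]
Description=ceph target allowing to start/stop all ceph*@.service instances at once

[Install]
WantedBy=multi-user.target
EOF
# On a real node this would be /etc/systemd/system/ceph.target, followed by:
#   systemctl daemon-reload
#   systemctl enable --now ceph.target
grep -c '^\[' /tmp/ceph.target   # counts the two sections, [Unit] and [Install]
```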
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Friday 19 September 2025 11:30:59 +0000 (0:00:01.388) 0:01:21.472 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Friday 19 September 2025 11:31:01 +0000 (0:00:01.183) 0:01:22.656 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Friday 19 September 2025 11:31:01 +0000 (0:00:00.614) 0:01:23.271 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Friday 19 September 2025 11:31:02 +0000 (0:00:00.835) 0:01:24.106 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Friday 19 September 2025 11:31:03 +0000 (0:00:00.582) 0:01:24.688 ******
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Friday 19 September 2025 11:31:04 +0000 (0:00:01.262) 0:01:25.950 ******
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-1]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Friday 19 September 2025 11:31:46 +0000 (0:00:42.228) 0:02:08.179 ******
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Friday 19 September 2025 11:31:47 +0000 (0:00:00.682) 0:02:08.861 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Friday 19 September 2025 11:31:48 +0000 (0:00:00.804) 0:02:09.666 ******
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Friday 19 September 2025 11:31:48 +0000 (0:00:00.162) 0:02:09.828 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Friday 19 September 2025 11:31:48 +0000 (0:00:00.741) 0:02:10.570 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Friday 19 September 2025 11:31:49 +0000 (0:00:00.879) 0:02:11.450 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Friday 19 September 2025 11:31:50 +0000 (0:00:00.689) 0:02:12.139 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
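The 42-second "Pulling Ceph container image" task above is essentially a retried image pull on every node. A sketch of such a retry wrapper; the attempt count, delay, and the podman example in the comment are assumptions (ceph-ansible's exact retry parameters and image reference are not shown in this log).

```shell
# Run a command up to N times, sleeping between attempts; return 1 on
# exhaustion, 0 as soon as the command succeeds.
retry() {
  # $1 = max attempts, remaining args = command to run
  n=$1; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$n" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# Real usage on a node (image reference is illustrative only):
#   retry 3 podman pull quay.io/ceph/ceph:v18
retry 3 true && echo "pull ok"
```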
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Friday 19 September 2025 11:31:53 +0000 (0:00:02.550) 0:02:14.690 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Friday 19 September 2025 11:31:53 +0000 (0:00:00.642) 0:02:15.332 ******
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Friday 19 September 2025 11:31:54 +0000 (0:00:01.078) 0:02:16.410 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Friday 19 September 2025 11:31:55 +0000 (0:00:00.606) 0:02:17.017 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Friday 19 September 2025 11:31:56 +0000 (0:00:00.735) 0:02:17.752 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Friday 19 September 2025 11:31:56 +0000 (0:00:00.549) 0:02:18.302 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Friday 19 September 2025 11:31:57 +0000 (0:00:00.614) 0:02:18.916 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Friday 19 September 2025 11:31:57 +0000 (0:00:00.531) 0:02:19.448 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Friday 19 September 2025 11:31:58 +0000 (0:00:00.715) 0:02:20.163 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Friday 19 September 2025 11:31:59 +0000 (0:00:00.551) 0:02:20.714 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Friday 19 September 2025 11:31:59 +0000 (0:00:00.639) 0:02:21.354 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Friday 19 September 2025 11:32:00 +0000 (0:00:01.188) 0:02:22.543 ******
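The "Set_fact ceph_release ..." cascade above tries each release name in turn against the version reported by `ceph --version`; here only the reef branch matched. A sketch of that mapping as a single function; the version-to-name table is an assumption drawn from the task names in the log, and only recent releases are listed.

```shell
# Map the output of `ceph --version` to a release codename.
ceph_release() {
  # $1 = version line, e.g. "ceph version 18.2.4 (...) reef (stable)"
  major=$(printf '%s' "$1" | awk '{print $3}' | cut -d. -f1)
  case "$major" in
    16) echo pacific ;;
    17) echo quincy ;;
    18) echo reef ;;
    *)  echo unknown ;;
  esac
}

ceph_release "ceph version 18.2.4 (e7ad534) reef (stable)"   # -> reef
```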
11:40:26.595784 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:40:26.595792 | orchestrator | 2025-09-19 11:40:26.595800 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-19 11:40:26.595807 | orchestrator | Friday 19 September 2025 11:32:01 +0000 (0:00:01.032) 0:02:23.575 ****** 2025-09-19 11:40:26.595815 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-19 11:40:26.595823 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-19 11:40:26.595831 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-19 11:40:26.595839 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595847 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-19 11:40:26.595855 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-19 11:40:26.595863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595870 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-19 11:40:26.595878 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595886 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-19 11:40:26.595894 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595905 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-19 11:40:26.595913 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595921 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-19 11:40:26.595929 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.595937 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 
2025-09-19 11:40:26.595945 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-19 11:40:26.595952 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-19 11:40:26.595960 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.595968 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.595982 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.595990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.595998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.596006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-19 11:40:26.596014 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.596022 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.596035 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596043 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.596051 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.596059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-19 11:40:26.596067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596074 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596082 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596090 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596098 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-19 11:40:26.596113 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596129 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596137 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-19 11:40:26.596153 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596161 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596168 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596176 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596184 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-19 11:40:26.596200 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596207 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596215 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-19 11:40:26.596247 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596254 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596262 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596270 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596278 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 11:40:26.596285 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596293 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596340 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596348 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596364 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:40:26.596372 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596380 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596387 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596408 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:40:26.596427 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:40:26.596435 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:40:26.596443 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 
2025-09-19 11:40:26.596459 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:40:26.596467 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596475 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:40:26.596482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596495 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:40:26.596503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596511 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596519 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-19 11:40:26.596527 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 11:40:26.596535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:40:26.596543 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596551 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 11:40:26.596559 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 11:40:26.596567 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 11:40:26.596575 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-19 11:40:26.596583 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 11:40:26.596591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:40:26.596599 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 11:40:26.596607 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 11:40:26.596615 | 
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 11:40:26.596623 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-19 11:40:26.596631 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 11:40:26.596638 | orchestrator | 2025-09-19 11:40:26.596646 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 11:40:26.596654 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:06.977) 0:02:30.553 ****** 2025-09-19 11:40:26.596662 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.596670 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.596678 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.596686 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.596694 | orchestrator | 2025-09-19 11:40:26.596702 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 11:40:26.596710 | orchestrator | Friday 19 September 2025 11:32:10 +0000 (0:00:01.147) 0:02:31.701 ****** 2025-09-19 11:40:26.596719 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596727 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596743 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596751 | orchestrator | 2025-09-19 11:40:26.596759 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 11:40:26.596767 | orchestrator | Friday 19 September 2025 11:32:10 +0000 (0:00:00.902) 
0:02:32.603 ****** 2025-09-19 11:40:26.596775 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596783 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596791 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.596800 | orchestrator | 2025-09-19 11:40:26.596807 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-19 11:40:26.596816 | orchestrator | Friday 19 September 2025 11:32:12 +0000 (0:00:01.623) 0:02:34.226 ****** 2025-09-19 11:40:26.596824 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.596831 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.596839 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.596847 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.596855 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.596863 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.596869 | orchestrator | 2025-09-19 11:40:26.596876 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 11:40:26.596883 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:00.691) 0:02:34.918 ****** 2025-09-19 11:40:26.596889 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.596896 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.596903 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.596909 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.596916 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.596925 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.596932 | orchestrator | 2025-09-19 11:40:26.596939 | 
orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 11:40:26.596946 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.748) 0:02:35.666 ****** 2025-09-19 11:40:26.596953 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.596959 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.596966 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.596973 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.596979 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.596986 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.596992 | orchestrator | 2025-09-19 11:40:26.596999 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 11:40:26.597006 | orchestrator | Friday 19 September 2025 11:32:14 +0000 (0:00:00.665) 0:02:36.331 ****** 2025-09-19 11:40:26.597016 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597024 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597030 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597037 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597044 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597050 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597057 | orchestrator | 2025-09-19 11:40:26.597064 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 11:40:26.597071 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.573) 0:02:36.904 ****** 2025-09-19 11:40:26.597078 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597084 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597091 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597097 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597104 | orchestrator | skipping: [testbed-node-1] 
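The "Create ceph initial directories" task earlier in this log maps one-to-one onto a plain `ansible.builtin.file` loop. A minimal sketch of what such a task may look like — the directory list is copied from the log items above, while the owner/group/mode values are assumptions, not taken from the ceph-ansible source:

```yaml
# Sketch only: reproduces the item list seen in the log.
# owner/group/mode are assumed values, not from ceph-ansible.
- name: Create ceph initial directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - /etc/ceph
    - /var/lib/ceph/
    - /var/lib/ceph/mon
    - /var/lib/ceph/osd
    - /var/lib/ceph/mds
    - /var/lib/ceph/tmp
    - /var/lib/ceph/crash
    - /var/lib/ceph/radosgw
    - /var/lib/ceph/bootstrap-rgw
    - /var/lib/ceph/bootstrap-mgr
    - /var/lib/ceph/bootstrap-mds
    - /var/lib/ceph/bootstrap-osd
    - /var/lib/ceph/bootstrap-rbd
    - /var/lib/ceph/bootstrap-rbd-mirror
    - /var/run/ceph
    - /var/log/ceph
```

Because the loop fans out across all six testbed nodes concurrently, the per-item `changed:` lines from different hosts interleave in the log, which is why the item order above appears scrambled in the output.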
2025-09-19 11:40:26.597115 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597122 | orchestrator | 2025-09-19 11:40:26.597129 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-19 11:40:26.597136 | orchestrator | Friday 19 September 2025 11:32:16 +0000 (0:00:00.792) 0:02:37.697 ****** 2025-09-19 11:40:26.597143 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597150 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597156 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597163 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597176 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597183 | orchestrator | 2025-09-19 11:40:26.597190 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 11:40:26.597197 | orchestrator | Friday 19 September 2025 11:32:16 +0000 (0:00:00.573) 0:02:38.271 ****** 2025-09-19 11:40:26.597203 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597210 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597217 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597223 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597230 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597237 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597243 | orchestrator | 2025-09-19 11:40:26.597250 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 11:40:26.597257 | orchestrator | Friday 19 September 2025 11:32:17 +0000 (0:00:00.796) 0:02:39.068 ****** 2025-09-19 11:40:26.597264 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597270 | orchestrator | skipping: [testbed-node-3] 
2025-09-19 11:40:26.597277 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597284 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597290 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597307 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597314 | orchestrator | 2025-09-19 11:40:26.597321 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 11:40:26.597328 | orchestrator | Friday 19 September 2025 11:32:18 +0000 (0:00:00.705) 0:02:39.774 ****** 2025-09-19 11:40:26.597335 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597341 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597348 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597355 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.597362 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.597368 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.597375 | orchestrator | 2025-09-19 11:40:26.597382 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 11:40:26.597389 | orchestrator | Friday 19 September 2025 11:32:21 +0000 (0:00:03.170) 0:02:42.944 ****** 2025-09-19 11:40:26.597395 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.597402 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.597409 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.597415 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597422 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597429 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597435 | orchestrator | 2025-09-19 11:40:26.597442 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 11:40:26.597449 | orchestrator | Friday 19 September 2025 11:32:21 +0000 (0:00:00.595) 0:02:43.540 ****** 2025-09-19 
11:40:26.597456 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.597462 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.597469 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.597476 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597485 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597496 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597508 | orchestrator | 2025-09-19 11:40:26.597517 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-19 11:40:26.597533 | orchestrator | Friday 19 September 2025 11:32:22 +0000 (0:00:00.707) 0:02:44.247 ****** 2025-09-19 11:40:26.597543 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597553 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597566 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.597580 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597595 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597610 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597626 | orchestrator | 2025-09-19 11:40:26.597643 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-19 11:40:26.597667 | orchestrator | Friday 19 September 2025 11:32:23 +0000 (0:00:00.547) 0:02:44.794 ****** 2025-09-19 11:40:26.597686 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.597705 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.597725 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.597743 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.597763 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.597783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.597802 | orchestrator | 2025-09-19 11:40:26.597834 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-19 11:40:26.597854 | orchestrator | Friday 19 September 2025 11:32:23 +0000 (0:00:00.817) 0:02:45.612 ****** 2025-09-19 11:40:26.597874 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-19 11:40:26.597896 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-19 11:40:26.597910 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.597925 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-19 11:40:26.597939 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-19 11:40:26.597953 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.597967 | orchestrator | skipping: 
[testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-19 11:40:26.597981 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-19 11:40:26.597994 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598008 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598136 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598150 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598161 | orchestrator | 2025-09-19 11:40:26.598173 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-19 11:40:26.598180 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:00.895) 0:02:46.507 ****** 2025-09-19 11:40:26.598187 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598193 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.598200 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598207 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598213 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598220 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598226 | orchestrator | 2025-09-19 11:40:26.598233 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-19 11:40:26.598239 | orchestrator | Friday 19 September 2025 11:32:25 +0000 (0:00:01.047) 0:02:47.555 ****** 2025-09-19 11:40:26.598246 | orchestrator | skipping: [testbed-node-3] 
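The skipped "Set config to cluster" items spell out the per-instance RGW settings for each storage node. Written out as a ceph.conf section, the values shown in the log for testbed-node-3 would read roughly as follows (the section/key layout is an assumption; the values are copied from the log):

```ini
; Sketch of the rendered RGW client section for testbed-node-3,
; assembled from the key/value pairs visible in the skipped items.
[client.rgw.default.testbed-node-3.rgw0]
log_file = /var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log
rgw_frontends = beast endpoint=192.168.16.13:8081
```

Each RGW node gets the same pair of keys with its own address (192.168.16.13/.14/.15, all on port 8081), matching the `rgw0` instance directories created earlier in the run.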
2025-09-19 11:40:26.598252 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.598259 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598265 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598272 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598279 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598285 | orchestrator | 2025-09-19 11:40:26.598292 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:40:26.598338 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:00.904) 0:02:48.460 ****** 2025-09-19 11:40:26.598345 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598352 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.598358 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598365 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598372 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598378 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598385 | orchestrator | 2025-09-19 11:40:26.598397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 11:40:26.598404 | orchestrator | Friday 19 September 2025 11:32:27 +0000 (0:00:00.867) 0:02:49.327 ****** 2025-09-19 11:40:26.598410 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598417 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.598424 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598430 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598437 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598443 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598450 | orchestrator | 2025-09-19 11:40:26.598457 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] 
**** 2025-09-19 11:40:26.598464 | orchestrator | Friday 19 September 2025 11:32:28 +0000 (0:00:00.694) 0:02:50.022 ****** 2025-09-19 11:40:26.598470 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598507 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.598515 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.598522 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598528 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598535 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598541 | orchestrator | 2025-09-19 11:40:26.598548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:40:26.598554 | orchestrator | Friday 19 September 2025 11:32:29 +0000 (0:00:01.070) 0:02:51.093 ****** 2025-09-19 11:40:26.598561 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.598568 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.598574 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598581 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.598587 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598594 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598608 | orchestrator | 2025-09-19 11:40:26.598615 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:40:26.598622 | orchestrator | Friday 19 September 2025 11:32:30 +0000 (0:00:00.614) 0:02:51.707 ****** 2025-09-19 11:40:26.598628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.598635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.598642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.598648 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598655 | orchestrator | 2025-09-19 11:40:26.598661 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:40:26.598668 | orchestrator | Friday 19 September 2025 11:32:30 +0000 (0:00:00.571) 0:02:52.279 ****** 2025-09-19 11:40:26.598675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.598681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.598688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.598695 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598701 | orchestrator | 2025-09-19 11:40:26.598708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:40:26.598714 | orchestrator | Friday 19 September 2025 11:32:31 +0000 (0:00:00.674) 0:02:52.953 ****** 2025-09-19 11:40:26.598721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.598728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.598734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.598741 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.598748 | orchestrator | 2025-09-19 11:40:26.598754 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:40:26.598761 | orchestrator | Friday 19 September 2025 11:32:31 +0000 (0:00:00.630) 0:02:53.584 ****** 2025-09-19 11:40:26.598767 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.598774 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.598781 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.598787 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598794 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598800 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598807 | orchestrator | 2025-09-19 11:40:26.598814 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2025-09-19 11:40:26.598820 | orchestrator | Friday 19 September 2025 11:32:32 +0000 (0:00:00.558) 0:02:54.143 ****** 2025-09-19 11:40:26.598827 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:40:26.598833 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:40:26.598840 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:40:26.598846 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-19 11:40:26.598853 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.598859 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-19 11:40:26.598866 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.598873 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-19 11:40:26.598879 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.598886 | orchestrator | 2025-09-19 11:40:26.598892 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-19 11:40:26.598898 | orchestrator | Friday 19 September 2025 11:32:34 +0000 (0:00:02.316) 0:02:56.459 ****** 2025-09-19 11:40:26.598904 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.598910 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.598916 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.598922 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.598928 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.598934 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.598940 | orchestrator | 2025-09-19 11:40:26.598946 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:40:26.598960 | orchestrator | Friday 19 September 2025 11:32:37 +0000 (0:00:02.848) 0:02:59.308 ****** 2025-09-19 11:40:26.598966 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.598972 | orchestrator | changed: [testbed-node-4] 
2025-09-19 11:40:26.598978 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.598984 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.598990 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.598996 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.599002 | orchestrator |
2025-09-19 11:40:26.599012 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-19 11:40:26.599018 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:01.721) 0:03:01.030 ******
2025-09-19 11:40:26.599024 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599030 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.599036 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.599043 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.599049 | orchestrator |
2025-09-19 11:40:26.599055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-19 11:40:26.599061 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:01.024) 0:03:02.054 ******
2025-09-19 11:40:26.599067 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.599073 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.599079 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.599085 | orchestrator |
2025-09-19 11:40:26.599109 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-19 11:40:26.599116 | orchestrator | Friday 19 September 2025 11:32:40 +0000 (0:00:00.365) 0:03:02.420 ******
2025-09-19 11:40:26.599122 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.599128 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.599135 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.599141 | orchestrator |
2025-09-19 11:40:26.599147 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-19 11:40:26.599153 | orchestrator | Friday 19 September 2025 11:32:42 +0000 (0:00:01.584) 0:03:04.004 ******
2025-09-19 11:40:26.599159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:40:26.599165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:40:26.599172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:40:26.599178 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.599184 | orchestrator |
2025-09-19 11:40:26.599190 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-19 11:40:26.599196 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.660) 0:03:04.664 ******
2025-09-19 11:40:26.599202 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.599209 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.599215 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.599221 | orchestrator |
2025-09-19 11:40:26.599227 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 11:40:26.599233 | orchestrator | Friday 19 September 2025 11:32:43 +0000 (0:00:00.342) 0:03:05.006 ******
2025-09-19 11:40:26.599239 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.599245 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.599251 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.599257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.599264 | orchestrator |
2025-09-19 11:40:26.599270 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 11:40:26.599276 | orchestrator | Friday 19 September 2025 11:32:44 +0000 (0:00:01.129) 0:03:06.136 ******
2025-09-19 11:40:26.599282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:40:26.599288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:40:26.599294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:40:26.599316 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599322 | orchestrator |
2025-09-19 11:40:26.599329 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-19 11:40:26.599335 | orchestrator | Friday 19 September 2025 11:32:44 +0000 (0:00:00.338) 0:03:06.475 ******
2025-09-19 11:40:26.599341 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599347 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.599354 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.599360 | orchestrator |
2025-09-19 11:40:26.599366 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-19 11:40:26.599372 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.391) 0:03:06.866 ******
2025-09-19 11:40:26.599379 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599385 | orchestrator |
2025-09-19 11:40:26.599391 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-19 11:40:26.599397 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.202) 0:03:07.069 ******
2025-09-19 11:40:26.599404 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599410 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.599416 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.599422 | orchestrator |
2025-09-19 11:40:26.599429 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-19 11:40:26.599435 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:00.360) 0:03:07.429 ******
2025-09-19 11:40:26.599441 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599447 | orchestrator |
2025-09-19 11:40:26.599453 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-19 11:40:26.599459 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.252) 0:03:07.682 ******
2025-09-19 11:40:26.599466 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599472 | orchestrator |
2025-09-19 11:40:26.599478 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-19 11:40:26.599484 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.272) 0:03:07.954 ******
2025-09-19 11:40:26.599490 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599497 | orchestrator |
2025-09-19 11:40:26.599503 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-19 11:40:26.599509 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.143) 0:03:08.097 ******
2025-09-19 11:40:26.599515 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599521 | orchestrator |
2025-09-19 11:40:26.599527 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-19 11:40:26.599534 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.209) 0:03:08.307 ******
2025-09-19 11:40:26.599540 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599546 | orchestrator |
2025-09-19 11:40:26.599555 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-19 11:40:26.599561 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.242) 0:03:08.550 ******
2025-09-19 11:40:26.599567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:40:26.599574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:40:26.599580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:40:26.599586 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599592 | orchestrator |
2025-09-19 11:40:26.599599 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-19 11:40:26.599605 | orchestrator | Friday 19 September 2025 11:32:47 +0000 (0:00:00.686) 0:03:09.237 ******
2025-09-19 11:40:26.599611 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599636 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.599644 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.599650 | orchestrator |
2025-09-19 11:40:26.599657 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-19 11:40:26.599667 | orchestrator | Friday 19 September 2025 11:32:48 +0000 (0:00:00.559) 0:03:09.796 ******
2025-09-19 11:40:26.599673 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599679 | orchestrator |
2025-09-19 11:40:26.599686 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-19 11:40:26.599692 | orchestrator | Friday 19 September 2025 11:32:48 +0000 (0:00:00.239) 0:03:10.036 ******
2025-09-19 11:40:26.599698 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599704 | orchestrator |
2025-09-19 11:40:26.599710 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 11:40:26.599716 | orchestrator | Friday 19 September 2025 11:32:48 +0000 (0:00:00.244) 0:03:10.280 ******
2025-09-19 11:40:26.599722 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.599728 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.599735 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.599741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.599747 | orchestrator |
2025-09-19 11:40:26.599753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 11:40:26.599759 | orchestrator | Friday 19 September 2025 11:32:49 +0000 (0:00:01.006) 0:03:11.286 ******
2025-09-19 11:40:26.599766 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.599772 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.599778 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.599784 | orchestrator |
2025-09-19 11:40:26.599790 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 11:40:26.599796 | orchestrator | Friday 19 September 2025 11:32:50 +0000 (0:00:00.351) 0:03:11.638 ******
2025-09-19 11:40:26.599802 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.599809 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.599815 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.599821 | orchestrator |
2025-09-19 11:40:26.599827 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 11:40:26.599833 | orchestrator | Friday 19 September 2025 11:32:51 +0000 (0:00:01.522) 0:03:13.160 ******
2025-09-19 11:40:26.599840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:40:26.599846 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:40:26.599852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:40:26.599858 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.599864 | orchestrator |
2025-09-19 11:40:26.599870 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 11:40:26.599877 | orchestrator | Friday 19 September 2025 11:32:52 +0000 (0:00:00.911) 0:03:14.072 ******
2025-09-19 11:40:26.599883 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.599889 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.599895 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.599901 | orchestrator |
2025-09-19 11:40:26.599907 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-19 11:40:26.599913 | orchestrator | Friday 19 September 2025 11:32:52 +0000 (0:00:00.393) 0:03:14.465 ******
2025-09-19 11:40:26.599919 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.599926 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.599932 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.599938 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.599944 | orchestrator |
2025-09-19 11:40:26.599950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-19 11:40:26.599956 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:01.243) 0:03:15.709 ******
2025-09-19 11:40:26.599962 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.599968 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.599975 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.599981 | orchestrator |
2025-09-19 11:40:26.599991 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-19 11:40:26.599997 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:00.331) 0:03:16.040 ******
2025-09-19 11:40:26.600003 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.600009 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.600015 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.600021 | orchestrator |
2025-09-19 11:40:26.600028 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-19 11:40:26.600034 | orchestrator | Friday 19 September 2025 11:32:56 +0000 (0:00:01.742) 0:03:17.782 ******
2025-09-19 11:40:26.600040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:40:26.600046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:40:26.600052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:40:26.600058 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.600064 | orchestrator |
2025-09-19 11:40:26.600071 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 11:40:26.600080 | orchestrator | Friday 19 September 2025 11:32:57 +0000 (0:00:00.868) 0:03:18.650 ******
2025-09-19 11:40:26.600086 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.600092 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.600098 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.600104 | orchestrator |
2025-09-19 11:40:26.600110 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-19 11:40:26.600117 | orchestrator | Friday 19 September 2025 11:32:57 +0000 (0:00:00.569) 0:03:19.219 ******
2025-09-19 11:40:26.600123 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.600129 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.600135 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.600141 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600147 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600153 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600159 | orchestrator |
2025-09-19 11:40:26.600166 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 11:40:26.600190 | orchestrator | Friday 19 September 2025 11:32:58 +0000 (0:00:01.060) 0:03:20.280 ******
2025-09-19 11:40:26.600198 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.600204 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.600210 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.600216 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.600222 | orchestrator |
2025-09-19 11:40:26.600229 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 11:40:26.600235 | orchestrator | Friday 19 September 2025 11:32:59 +0000 (0:00:01.023) 0:03:21.303 ******
2025-09-19 11:40:26.600241 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600247 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600253 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600259 | orchestrator |
2025-09-19 11:40:26.600265 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 11:40:26.600272 | orchestrator | Friday 19 September 2025 11:33:00 +0000 (0:00:00.344) 0:03:21.648 ******
2025-09-19 11:40:26.600278 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.600284 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.600290 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.600305 | orchestrator |
2025-09-19 11:40:26.600311 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 11:40:26.600317 | orchestrator | Friday 19 September 2025 11:33:01 +0000 (0:00:01.536) 0:03:23.185 ******
2025-09-19 11:40:26.600323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:40:26.600329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:40:26.600336 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:40:26.600347 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600353 | orchestrator |
2025-09-19 11:40:26.600359 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 11:40:26.600365 | orchestrator | Friday 19 September 2025 11:33:02 +0000 (0:00:00.475) 0:03:23.660 ******
2025-09-19 11:40:26.600372 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600377 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600383 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600390 | orchestrator |
2025-09-19 11:40:26.600396 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-19 11:40:26.600402 | orchestrator |
2025-09-19 11:40:26.600408 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 11:40:26.600414 | orchestrator | Friday 19 September 2025 11:33:02 +0000 (0:00:00.575) 0:03:24.236 ******
2025-09-19 11:40:26.600420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.600426 | orchestrator |
2025-09-19 11:40:26.600433 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 11:40:26.600439 | orchestrator | Friday 19 September 2025 11:33:03 +0000 (0:00:00.664) 0:03:24.901 ******
2025-09-19 11:40:26.600445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.600451 | orchestrator |
2025-09-19 11:40:26.600457 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 11:40:26.600463 | orchestrator | Friday 19 September 2025 11:33:03 +0000 (0:00:00.602) 0:03:25.504 ******
2025-09-19 11:40:26.600469 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600475 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600481 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600487 | orchestrator |
2025-09-19 11:40:26.600493 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 11:40:26.600499 | orchestrator | Friday 19 September 2025 11:33:04 +0000 (0:00:00.756) 0:03:26.261 ******
2025-09-19 11:40:26.600505 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600511 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600517 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600523 | orchestrator |
2025-09-19 11:40:26.600529 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 11:40:26.600535 | orchestrator | Friday 19 September 2025 11:33:05 +0000 (0:00:00.513) 0:03:26.774 ******
2025-09-19 11:40:26.600541 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600547 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600553 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600560 | orchestrator |
2025-09-19 11:40:26.600566 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 11:40:26.600572 | orchestrator | Friday 19 September 2025 11:33:05 +0000 (0:00:00.453) 0:03:27.227 ******
2025-09-19 11:40:26.600578 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600584 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600590 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600596 | orchestrator |
2025-09-19 11:40:26.600602 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 11:40:26.600608 | orchestrator | Friday 19 September 2025 11:33:06 +0000 (0:00:00.524) 0:03:27.752 ******
2025-09-19 11:40:26.600614 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600620 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600629 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600635 | orchestrator |
2025-09-19 11:40:26.600641 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 11:40:26.600647 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:00.874) 0:03:28.626 ******
2025-09-19 11:40:26.600653 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600660 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600670 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600676 | orchestrator |
2025-09-19 11:40:26.600682 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 11:40:26.600688 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:00.275) 0:03:28.901 ******
2025-09-19 11:40:26.600694 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600700 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600706 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600713 | orchestrator |
2025-09-19 11:40:26.600737 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 11:40:26.600745 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:00.405) 0:03:29.307 ******
2025-09-19 11:40:26.600751 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600757 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600763 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600769 | orchestrator |
2025-09-19 11:40:26.600775 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 11:40:26.600781 | orchestrator | Friday 19 September 2025 11:33:08 +0000 (0:00:00.672) 0:03:29.980 ******
2025-09-19 11:40:26.600787 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600793 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600799 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600805 | orchestrator |
2025-09-19 11:40:26.600812 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 11:40:26.600818 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:00.761) 0:03:30.741 ******
2025-09-19 11:40:26.600824 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600830 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600836 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600842 | orchestrator |
2025-09-19 11:40:26.600848 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 11:40:26.600854 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:00.542) 0:03:31.283 ******
2025-09-19 11:40:26.600860 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.600866 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.600872 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.600878 | orchestrator |
2025-09-19 11:40:26.600884 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 11:40:26.600890 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.669) 0:03:31.952 ******
2025-09-19 11:40:26.600896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600902 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600909 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600915 | orchestrator |
2025-09-19 11:40:26.600921 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 11:40:26.600927 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.563) 0:03:32.516 ******
2025-09-19 11:40:26.600933 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600939 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600945 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600951 | orchestrator |
2025-09-19 11:40:26.600957 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 11:40:26.600963 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.362) 0:03:32.879 ******
2025-09-19 11:40:26.600969 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.600975 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.600981 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.600987 | orchestrator |
2025-09-19 11:40:26.600993 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 11:40:26.601000 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.422) 0:03:33.301 ******
2025-09-19 11:40:26.601006 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.601012 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.601018 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.601024 | orchestrator |
2025-09-19 11:40:26.601035 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 11:40:26.601041 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.888) 0:03:34.189 ******
2025-09-19 11:40:26.601047 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.601053 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:26.601059 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:26.601065 | orchestrator |
2025-09-19 11:40:26.601071 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 11:40:26.601077 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.329) 0:03:34.519 ******
2025-09-19 11:40:26.601083 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601089 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601095 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601101 | orchestrator |
2025-09-19 11:40:26.601107 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 11:40:26.601114 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.382) 0:03:34.901 ******
2025-09-19 11:40:26.601120 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601126 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601132 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601138 | orchestrator |
2025-09-19 11:40:26.601144 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 11:40:26.601150 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.327) 0:03:35.229 ******
2025-09-19 11:40:26.601156 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601162 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601168 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601174 | orchestrator |
2025-09-19 11:40:26.601180 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-19 11:40:26.601186 | orchestrator | Friday 19 September 2025 11:33:14 +0000 (0:00:00.753) 0:03:35.982 ******
2025-09-19 11:40:26.601192 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601198 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601204 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601210 | orchestrator |
2025-09-19 11:40:26.601219 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-19 11:40:26.601225 | orchestrator | Friday 19 September 2025 11:33:14 +0000 (0:00:00.379) 0:03:36.362 ******
2025-09-19 11:40:26.601231 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.601237 | orchestrator |
2025-09-19 11:40:26.601244 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-19 11:40:26.601250 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:00.509) 0:03:36.872 ******
2025-09-19 11:40:26.601256 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.601262 | orchestrator |
2025-09-19 11:40:26.601268 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-19 11:40:26.601292 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:00.339) 0:03:37.211 ******
2025-09-19 11:40:26.601324 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-19 11:40:26.601331 | orchestrator |
2025-09-19 11:40:26.601337 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-19 11:40:26.601343 | orchestrator | Friday 19 September 2025 11:33:16 +0000 (0:00:00.804) 0:03:38.015 ******
2025-09-19 11:40:26.601349 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601356 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601362 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601368 | orchestrator |
2025-09-19 11:40:26.601374 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-19 11:40:26.601380 | orchestrator | Friday 19 September 2025 11:33:16 +0000 (0:00:00.319) 0:03:38.335 ******
2025-09-19 11:40:26.601386 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.601392 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.601398 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.601404 | orchestrator |
2025-09-19 11:40:26.601415 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-19 11:40:26.601422 | orchestrator | Friday 19 September 2025 11:33:17 +0000 (0:00:00.322) 0:03:38.657 ******
2025-09-19 11:40:26.601428 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.601434 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.601440 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.601446 | orchestrator |
TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Friday 19 September 2025 11:33:18 +0000 (0:00:01.319) 0:03:39.977 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Create monitor directory] *************************************
Friday 19 September 2025 11:33:19 +0000 (0:00:00.966) 0:03:40.943 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Friday 19 September 2025 11:33:20 +0000 (0:00:00.674) 0:03:41.618 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Friday 19 September 2025 11:33:20 +0000 (0:00:00.643) 0:03:42.261 ******
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Friday 19 September 2025 11:33:21 +0000 (0:00:01.220) 0:03:43.481 ******
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Friday 19 September 2025 11:33:22 +0000 (0:00:00.670) 0:03:44.152 ******
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Friday 19 September 2025 11:33:26 +0000 (0:00:03.646) 0:03:47.799 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Friday 19 September 2025 11:33:27 +0000 (0:00:01.456) 0:03:49.256 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Friday 19 September 2025 11:33:27 +0000 (0:00:00.337) 0:03:49.594 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Friday 19 September 2025 11:33:28 +0000 (0:00:00.348) 0:03:49.942 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Friday 19 September 2025 11:33:30 +0000 (0:00:01.862) 0:03:51.804 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Friday 19 September 2025 11:33:31 +0000 (0:00:01.497) 0:03:53.302 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Friday 19 September 2025 11:33:32 +0000 (0:00:00.324) 0:03:53.627 ******
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Friday 19 September 2025 11:33:32 +0000 (0:00:00.508) 0:03:54.135 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Friday 19 September 2025 11:33:33 +0000 (0:00:00.547) 0:03:54.683 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Friday 19 September 2025 11:33:33 +0000 (0:00:00.370) 0:03:55.053 ******
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Friday 19 September 2025 11:33:34 +0000 (0:00:00.659) 0:03:55.713 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Friday 19 September 2025 11:33:36 +0000 (0:00:02.317) 0:03:58.031 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Friday 19 September 2025 11:33:37 +0000 (0:00:01.230) 0:03:59.261 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Start the monitor service] ************************************
Friday 19 September 2025 11:33:39 +0000 (0:00:01.839) 0:04:01.101 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Friday 19 September 2025 11:33:41 +0000 (0:00:02.001) 0:04:03.102 ******
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Friday 19 September 2025 11:33:42 +0000 (0:00:00.816) 0:04:03.918 ******
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Friday 19 September 2025 11:34:04 +0000 (0:00:21.959) 0:04:25.878 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Friday 19 September 2025 11:34:14 +0000 (0:00:10.102) 0:04:35.980 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Friday 19 September 2025 11:34:14 +0000 (0:00:00.330) 0:04:36.311 ******
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5aee5be310ce729baf5f1d54f89d49cd456e1dcb'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 19 September 2025 11:34:29 +0000 (0:00:14.937) 0:04:51.249 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Friday 19 September 2025 11:34:30 +0000 (0:00:00.402) 0:04:51.651 ******
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Friday 19 September 2025 11:34:30 +0000 (0:00:00.869) 0:04:52.521 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Friday 19 September 2025 11:34:31 +0000 (0:00:00.451) 0:04:52.972 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Friday 19 September 2025 11:34:31 +0000 (0:00:00.440) 0:04:53.413 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Friday 19 September 2025 11:34:32 +0000 (0:00:00.667) 0:04:54.080 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 11:34:33 +0000 (0:00:00.879) 0:04:54.960 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 11:34:33 +0000 (0:00:00.573) 0:04:55.533 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 11:34:34 +0000 (0:00:00.552) 0:04:56.086 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 11:34:35 +0000 (0:00:01.062) 0:04:57.148 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 11:34:35 +0000 (0:00:00.328) 0:04:57.477 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 11:34:36 +0000 (0:00:00.299) 0:04:57.776 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 11:34:36 +0000 (0:00:00.330) 0:04:58.106 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 11:34:37 +0000 (0:00:01.009) 0:04:59.116 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 11:34:37 +0000 (0:00:00.330) 0:04:59.446 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 11:34:38 +0000 (0:00:00.292) 0:04:59.739 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 11:34:38 +0000 (0:00:00.734) 0:05:00.473 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 11:34:39 +0000 (0:00:01.008) 0:05:01.482 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 11:34:40 +0000 (0:00:00.310) 0:05:01.792 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 11:34:40 +0000 (0:00:00.351) 0:05:02.143 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 11:34:40 +0000 (0:00:00.291) 0:05:02.435 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 11:34:41 +0000 (0:00:00.550) 0:05:02.985 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 11:34:41 +0000 (0:00:00.310) 0:05:03.296 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 11:34:41 +0000 (0:00:00.299) 0:05:03.595 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 11:34:42 +0000 (0:00:00.404) 0:05:04.000 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 11:34:42 +0000 (0:00:00.339) 0:05:04.339 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 11:34:43 +0000 (0:00:00.698) 0:05:05.038 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Friday 19 September 2025 11:34:43 +0000 (0:00:00.525) 0:05:05.563 ******
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Friday 19 September 2025 11:34:44 +0000 (0:00:00.935) 0:05:06.499 ******
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Friday 19 September 2025 11:34:45 +0000 (0:00:00.761) 0:05:07.260 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Friday 19 September 2025 11:34:46 +0000 (0:00:00.735) 0:05:07.996 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Friday 19 September 2025 11:34:46 +0000 (0:00:00.317) 0:05:08.313 ******
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Friday 19 September 2025 11:34:57 +0000 (0:00:10.620) 0:05:18.934 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Friday 19 September 2025 11:34:58 +0000 (0:00:00.939) 0:05:19.874 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Friday 19 September 2025 11:35:00 +0000 (0:00:02.341) 0:05:22.215 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Friday 19 September 2025 11:35:01 +0000 (0:00:01.261) 0:05:23.477 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Friday 19 September 2025 11:35:02 +0000 (0:00:00.685) 0:05:24.162 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Friday 19 September 2025 11:35:03 +0000 (0:00:00.591) 0:05:24.754 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Friday 19 September 2025 11:35:03 +0000 (0:00:00.305) 0:05:25.060 ******
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Friday 19 September 2025 11:35:04 +0000 (0:00:00.557) 0:05:25.617 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Friday 19 September 2025 11:35:04 +0000 (0:00:00.377) 0:05:25.994 ******
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Friday 19 September 2025 11:35:05 +0000 (0:00:00.708) 0:05:26.703 ******
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Friday 19 September 2025 11:35:05 +0000 (0:00:00.518) 0:05:27.221 ******
changed: [testbed-node-0]
2025-09-19 11:40:26.603817 | orchestrator | 
changed: [testbed-node-1] 2025-09-19 11:40:26.603822 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.603828 | orchestrator | 2025-09-19 11:40:26.603833 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-19 11:40:26.603838 | orchestrator | Friday 19 September 2025 11:35:06 +0000 (0:00:01.349) 0:05:28.571 ****** 2025-09-19 11:40:26.603844 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.603849 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.603855 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.603860 | orchestrator | 2025-09-19 11:40:26.603865 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-19 11:40:26.603871 | orchestrator | Friday 19 September 2025 11:35:08 +0000 (0:00:01.310) 0:05:29.881 ****** 2025-09-19 11:40:26.603876 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.603881 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.603887 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.603892 | orchestrator | 2025-09-19 11:40:26.603897 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-19 11:40:26.603903 | orchestrator | Friday 19 September 2025 11:35:10 +0000 (0:00:01.742) 0:05:31.624 ****** 2025-09-19 11:40:26.603908 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.603913 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.603919 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.603924 | orchestrator | 2025-09-19 11:40:26.603932 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-19 11:40:26.603937 | orchestrator | Friday 19 September 2025 11:35:12 +0000 (0:00:01.997) 0:05:33.622 ****** 2025-09-19 11:40:26.603943 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.603948 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:40:26.603953 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-19 11:40:26.603959 | orchestrator | 2025-09-19 11:40:26.603964 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-19 11:40:26.603969 | orchestrator | Friday 19 September 2025 11:35:12 +0000 (0:00:00.414) 0:05:34.036 ****** 2025-09-19 11:40:26.603975 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-19 11:40:26.603996 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-19 11:40:26.604003 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-19 11:40:26.604013 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-19 11:40:26.604018 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-19 11:40:26.604024 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2025-09-19 11:40:26.604029 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:40:26.604034 | orchestrator |
2025-09-19 11:40:26.604040 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-19 11:40:26.604045 | orchestrator | Friday 19 September 2025 11:35:49 +0000 (0:00:37.008) 0:06:11.045 ******
2025-09-19 11:40:26.604050 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:40:26.604055 | orchestrator |
2025-09-19 11:40:26.604061 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-19 11:40:26.604066 | orchestrator | Friday 19 September 2025 11:35:50 +0000 (0:00:01.312) 0:06:12.358 ******
2025-09-19 11:40:26.604071 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.604077 | orchestrator |
2025-09-19 11:40:26.604082 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-19 11:40:26.604087 | orchestrator | Friday 19 September 2025 11:35:51 +0000 (0:00:00.324) 0:06:12.682 ******
2025-09-19 11:40:26.604093 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.604098 | orchestrator |
2025-09-19 11:40:26.604103 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-19 11:40:26.604108 | orchestrator | Friday 19 September 2025 11:35:51 +0000 (0:00:00.147) 0:06:12.829 ******
2025-09-19 11:40:26.604114 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-19 11:40:26.604119 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-19 11:40:26.604124 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-19 11:40:26.604129 | orchestrator |
2025-09-19 11:40:26.604135 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-19 11:40:26.604140 | orchestrator | Friday 19 September 2025 11:35:57 +0000 (0:00:06.342) 0:06:19.172 ******
2025-09-19 11:40:26.604145 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-19 11:40:26.604150 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-19 11:40:26.604156 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-19 11:40:26.604161 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-19 11:40:26.604166 | orchestrator |
2025-09-19 11:40:26.604171 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 11:40:26.604177 | orchestrator | Friday 19 September 2025 11:36:02 +0000 (0:00:04.745) 0:06:23.917 ******
2025-09-19 11:40:26.604182 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.604187 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.604193 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.604198 | orchestrator |
2025-09-19 11:40:26.604203 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 11:40:26.604209 | orchestrator | Friday 19 September 2025 11:36:03 +0000 (0:00:00.964) 0:06:24.882 ******
2025-09-19 11:40:26.604214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:26.604219 | orchestrator |
2025-09-19 11:40:26.604225 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 11:40:26.604230 | orchestrator | Friday 19 September 2025 11:36:03 +0000 (0:00:00.540) 0:06:25.423 ******
2025-09-19 11:40:26.604235 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.604240 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.604249 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.604254 | orchestrator |
2025-09-19 11:40:26.604259 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 11:40:26.604265 | orchestrator | Friday 19 September 2025 11:36:04 +0000 (0:00:00.365) 0:06:25.788 ******
2025-09-19 11:40:26.604270 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:26.604275 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:26.604281 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:26.604286 | orchestrator |
2025-09-19 11:40:26.604291 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 11:40:26.604305 | orchestrator | Friday 19 September 2025 11:36:05 +0000 (0:00:01.387) 0:06:27.176 ******
2025-09-19 11:40:26.604311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:40:26.604316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:40:26.604324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:40:26.604330 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:26.604335 | orchestrator |
2025-09-19 11:40:26.604340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 11:40:26.604346 | orchestrator | Friday 19 September 2025 11:36:06 +0000 (0:00:00.616) 0:06:27.793 ******
2025-09-19 11:40:26.604351 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:26.604356 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:26.604362 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:26.604367 | orchestrator |
2025-09-19 11:40:26.604372 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-19 11:40:26.604378 | orchestrator |
2025-09-19 11:40:26.604383 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 11:40:26.604389 | orchestrator | Friday 19 September 2025 11:36:06 +0000 (0:00:00.554) 0:06:28.347 ******
2025-09-19 11:40:26.604412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.604419 | orchestrator |
2025-09-19 11:40:26.604424 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 11:40:26.604429 | orchestrator | Friday 19 September 2025 11:36:07 +0000 (0:00:00.726) 0:06:29.074 ******
2025-09-19 11:40:26.604435 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.604440 | orchestrator |
2025-09-19 11:40:26.604446 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 11:40:26.604451 | orchestrator | Friday 19 September 2025 11:36:08 +0000 (0:00:00.561) 0:06:29.636 ******
2025-09-19 11:40:26.604456 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604462 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604467 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604472 | orchestrator |
2025-09-19 11:40:26.604478 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 11:40:26.604483 | orchestrator | Friday 19 September 2025 11:36:08 +0000 (0:00:00.303) 0:06:29.939 ******
2025-09-19 11:40:26.604488 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604494 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604499 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604504 | orchestrator |
2025-09-19 11:40:26.604510 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 11:40:26.604515 | orchestrator | Friday 19 September 2025 11:36:09 +0000 (0:00:00.927) 0:06:30.866 ******
2025-09-19 11:40:26.604520 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604526 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604531 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604536 | orchestrator |
2025-09-19 11:40:26.604542 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 11:40:26.604547 | orchestrator | Friday 19 September 2025 11:36:10 +0000 (0:00:00.757) 0:06:31.623 ******
2025-09-19 11:40:26.604552 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604563 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604568 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604573 | orchestrator |
2025-09-19 11:40:26.604579 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 11:40:26.604584 | orchestrator | Friday 19 September 2025 11:36:10 +0000 (0:00:00.721) 0:06:32.345 ******
2025-09-19 11:40:26.604589 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604595 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604600 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604605 | orchestrator |
2025-09-19 11:40:26.604611 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 11:40:26.604616 | orchestrator | Friday 19 September 2025 11:36:11 +0000 (0:00:00.321) 0:06:32.667 ******
2025-09-19 11:40:26.604621 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604627 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604632 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604637 | orchestrator |
2025-09-19 11:40:26.604643 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 11:40:26.604648 | orchestrator | Friday 19 September 2025 11:36:11 +0000 (0:00:00.606) 0:06:33.273 ******
2025-09-19 11:40:26.604653 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604658 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604664 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604669 | orchestrator |
2025-09-19 11:40:26.604674 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 11:40:26.604680 | orchestrator | Friday 19 September 2025 11:36:11 +0000 (0:00:00.315) 0:06:33.589 ******
2025-09-19 11:40:26.604685 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604690 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604696 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604701 | orchestrator |
2025-09-19 11:40:26.604706 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 11:40:26.604711 | orchestrator | Friday 19 September 2025 11:36:12 +0000 (0:00:00.749) 0:06:34.339 ******
2025-09-19 11:40:26.604717 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604722 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604727 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604733 | orchestrator |
2025-09-19 11:40:26.604738 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 11:40:26.604743 | orchestrator | Friday 19 September 2025 11:36:13 +0000 (0:00:00.757) 0:06:35.096 ******
2025-09-19 11:40:26.604749 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604754 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604759 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604764 | orchestrator |
2025-09-19 11:40:26.604770 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 11:40:26.604775 | orchestrator | Friday 19 September 2025 11:36:14 +0000 (0:00:00.612) 0:06:35.709 ******
2025-09-19 11:40:26.604780 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604786 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604791 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604796 | orchestrator |
2025-09-19 11:40:26.604802 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 11:40:26.604809 | orchestrator | Friday 19 September 2025 11:36:14 +0000 (0:00:00.330) 0:06:36.039 ******
2025-09-19 11:40:26.604815 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604820 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604825 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604831 | orchestrator |
2025-09-19 11:40:26.604836 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 11:40:26.604841 | orchestrator | Friday 19 September 2025 11:36:14 +0000 (0:00:00.331) 0:06:36.371 ******
2025-09-19 11:40:26.604846 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604852 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604857 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604865 | orchestrator |
2025-09-19 11:40:26.604871 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 11:40:26.604876 | orchestrator | Friday 19 September 2025 11:36:15 +0000 (0:00:00.327) 0:06:36.699 ******
2025-09-19 11:40:26.604881 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.604887 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.604895 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.604900 | orchestrator |
2025-09-19 11:40:26.604906 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 11:40:26.604911 | orchestrator | Friday 19 September 2025 11:36:15 +0000 (0:00:00.650) 0:06:37.350 ******
2025-09-19 11:40:26.604916 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604922 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604927 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604932 | orchestrator |
2025-09-19 11:40:26.604937 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 11:40:26.604943 | orchestrator | Friday 19 September 2025 11:36:16 +0000 (0:00:00.327) 0:06:37.678 ******
2025-09-19 11:40:26.604948 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604953 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604959 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604964 | orchestrator |
2025-09-19 11:40:26.604969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 11:40:26.604975 | orchestrator | Friday 19 September 2025 11:36:16 +0000 (0:00:00.295) 0:06:37.973 ******
2025-09-19 11:40:26.604980 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.604985 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.604990 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.604996 | orchestrator |
2025-09-19 11:40:26.605001 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 11:40:26.605006 | orchestrator | Friday 19 September 2025 11:36:16 +0000 (0:00:00.299) 0:06:38.273 ******
2025-09-19 11:40:26.605012 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605017 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605022 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605027 | orchestrator |
2025-09-19 11:40:26.605033 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 11:40:26.605038 | orchestrator | Friday 19 September 2025 11:36:17 +0000 (0:00:00.660) 0:06:38.933 ******
2025-09-19 11:40:26.605043 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605049 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605054 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605059 | orchestrator |
2025-09-19 11:40:26.605065 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-19 11:40:26.605070 | orchestrator | Friday 19 September 2025 11:36:17 +0000 (0:00:00.532) 0:06:39.466 ******
2025-09-19 11:40:26.605075 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605080 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605086 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605091 | orchestrator |
2025-09-19 11:40:26.605096 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-19 11:40:26.605102 | orchestrator | Friday 19 September 2025 11:36:18 +0000 (0:00:00.308) 0:06:39.774 ******
2025-09-19 11:40:26.605107 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 11:40:26.605112 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:40:26.605118 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:40:26.605123 | orchestrator |
2025-09-19 11:40:26.605128 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-19 11:40:26.605133 | orchestrator | Friday 19 September 2025 11:36:19 +0000 (0:00:00.867) 0:06:40.642 ******
2025-09-19 11:40:26.605139 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.605148 | orchestrator |
2025-09-19 11:40:26.605153 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-19 11:40:26.605158 | orchestrator | Friday 19 September 2025 11:36:19 +0000 (0:00:00.786) 0:06:41.428 ******
2025-09-19 11:40:26.605164 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.605169 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.605175 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.605180 | orchestrator |
2025-09-19 11:40:26.605185 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-19 11:40:26.605191 | orchestrator | Friday 19 September 2025 11:36:20 +0000 (0:00:00.300) 0:06:41.729 ******
2025-09-19 11:40:26.605196 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.605201 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.605206 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.605212 | orchestrator |
2025-09-19 11:40:26.605217 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-19 11:40:26.605222 | orchestrator | Friday 19 September 2025 11:36:20 +0000 (0:00:00.329) 0:06:42.058 ******
2025-09-19 11:40:26.605228 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605233 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605238 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605243 | orchestrator |
2025-09-19 11:40:26.605249 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-19 11:40:26.605254 | orchestrator | Friday 19 September 2025 11:36:21 +0000 (0:00:00.966) 0:06:43.025 ******
2025-09-19 11:40:26.605259 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605265 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605270 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605275 | orchestrator |
2025-09-19 11:40:26.605283 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-19 11:40:26.605289 | orchestrator | Friday 19 September 2025 11:36:21 +0000 (0:00:00.380) 0:06:43.405 ******
2025-09-19 11:40:26.605294 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:40:26.605324 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:40:26.605330 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:40:26.605336 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:40:26.605345 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:40:26.605351 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:40:26.605356 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:40:26.605362 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:40:26.605367 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:40:26.605373 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:40:26.605378 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:40:26.605384 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:40:26.605389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:40:26.605394 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:40:26.605400 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:40:26.605405 | orchestrator |
2025-09-19 11:40:26.605410 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-19 11:40:26.605416 | orchestrator | Friday 19 September 2025 11:36:24 +0000 (0:00:02.259) 0:06:45.665 ******
2025-09-19 11:40:26.605425 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.605430 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.605436 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.605441 | orchestrator |
2025-09-19 11:40:26.605446 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-19 11:40:26.605452 | orchestrator | Friday 19 September 2025 11:36:24 +0000 (0:00:00.305) 0:06:45.971 ******
2025-09-19 11:40:26.605457 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.605462 | orchestrator |
2025-09-19 11:40:26.605468 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-19 11:40:26.605473 | orchestrator | Friday 19 September 2025 11:36:25 +0000 (0:00:00.774) 0:06:46.746 ******
2025-09-19 11:40:26.605478 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:40:26.605484 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:40:26.605489 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:40:26.605494 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-19 11:40:26.605500 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-19 11:40:26.605505 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-19 11:40:26.605510 | orchestrator |
2025-09-19 11:40:26.605516 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-19 11:40:26.605521 | orchestrator | Friday 19 September 2025 11:36:26 +0000 (0:00:01.093) 0:06:47.839 ******
2025-09-19 11:40:26.605527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:40:26.605532 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 11:40:26.605537 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 11:40:26.605543 | orchestrator |
2025-09-19 11:40:26.605548 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-19 11:40:26.605553 | orchestrator | Friday 19 September 2025 11:36:28 +0000 (0:00:02.144) 0:06:49.983 ******
2025-09-19 11:40:26.605559 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 11:40:26.605564 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 11:40:26.605569 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.605575 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 11:40:26.605580 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 11:40:26.605586 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.605591 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 11:40:26.605596 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 11:40:26.605602 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.605607 | orchestrator |
2025-09-19 11:40:26.605612 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-19 11:40:26.605618 | orchestrator | Friday 19 September 2025 11:36:29 +0000 (0:00:01.525) 0:06:51.509 ******
2025-09-19 11:40:26.605623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:40:26.605629 | orchestrator |
2025-09-19 11:40:26.605634 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-19 11:40:26.605639 | orchestrator | Friday 19 September 2025 11:36:32 +0000 (0:00:02.262) 0:06:53.771 ******
2025-09-19 11:40:26.605645 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.605650 | orchestrator |
2025-09-19 11:40:26.605658 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-19 11:40:26.605663 | orchestrator | Friday 19 September 2025 11:36:32 +0000 (0:00:00.536) 0:06:54.307 ******
2025-09-19 11:40:26.605669 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ac676d1d-4f4c-546f-a12f-f85171bcd1d7', 'data_vg': 'ceph-ac676d1d-4f4c-546f-a12f-f85171bcd1d7'})
2025-09-19 11:40:26.605678 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c75d7215-6866-5647-89df-878c4666c32d', 'data_vg': 'ceph-c75d7215-6866-5647-89df-878c4666c32d'})
2025-09-19 11:40:26.605686 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9d0af248-3195-52cb-bed6-977ad9e4ee39', 'data_vg': 'ceph-9d0af248-3195-52cb-bed6-977ad9e4ee39'})
2025-09-19 11:40:26.605692 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ffd16df6-6207-59ff-a831-a7eb6df6d5c2', 'data_vg': 'ceph-ffd16df6-6207-59ff-a831-a7eb6df6d5c2'})
2025-09-19 11:40:26.605698 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0', 'data_vg': 'ceph-b93a97a3-21ec-5dc9-a656-27e3bfc6d1b0'})
2025-09-19 11:40:26.605703 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6e702043-5e82-5f33-ad25-d539496f9fd1', 'data_vg': 'ceph-6e702043-5e82-5f33-ad25-d539496f9fd1'})
2025-09-19 11:40:26.605709 | orchestrator |
2025-09-19 11:40:26.605714 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-19 11:40:26.605720 | orchestrator | Friday 19 September 2025 11:37:10 +0000 (0:00:37.748) 0:07:32.056 ******
2025-09-19 11:40:26.605725 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.605730 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.605736 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.605741 | orchestrator |
2025-09-19 11:40:26.605746 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-19 11:40:26.605752 | orchestrator | Friday 19 September 2025 11:37:10 +0000 (0:00:00.554) 0:07:32.610 ******
2025-09-19 11:40:26.605757 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.605762 | orchestrator |
2025-09-19 11:40:26.605768 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-19 11:40:26.605773 | orchestrator | Friday 19 September 2025 11:37:11 +0000 (0:00:00.698) 0:07:33.143 ******
2025-09-19 11:40:26.605778 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605784 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605789 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605795 | orchestrator |
2025-09-19 11:40:26.605800 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-19 11:40:26.605806 | orchestrator | Friday 19 September 2025 11:37:12 +0000 (0:00:00.698) 0:07:33.842 ******
2025-09-19 11:40:26.605811 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.605816 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.605822 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.605827 | orchestrator |
2025-09-19 11:40:26.605832 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-19 11:40:26.605838 | orchestrator | Friday 19 September 2025 11:37:15 +0000 (0:00:02.867) 0:07:36.709 ******
2025-09-19 11:40:26.605843 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:40:26.605849 | orchestrator |
2025-09-19 11:40:26.605854 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-19 11:40:26.605859 | orchestrator | Friday 19 September 2025 11:37:15 +0000 (0:00:00.524) 0:07:37.233 ******
2025-09-19 11:40:26.605865 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.605870 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.605875 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.605881 | orchestrator |
2025-09-19 11:40:26.605885 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-19 11:40:26.605890 | orchestrator | Friday 19 September 2025 11:37:16 +0000 (0:00:01.285) 0:07:38.519 ******
2025-09-19 11:40:26.605895 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.605900 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.605905 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.605909 | orchestrator |
2025-09-19 11:40:26.605914 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-19 11:40:26.605922 | orchestrator | Friday 19 September 2025 11:37:18 +0000 (0:00:01.423) 0:07:39.943 ******
2025-09-19 11:40:26.605926 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:40:26.605931 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:40:26.605936 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:40:26.605941 | orchestrator |
2025-09-19 11:40:26.605946 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-19 11:40:26.605950 | orchestrator | Friday 19 September 2025 11:37:20 +0000 (0:00:01.802) 0:07:41.745 ******
2025-09-19 11:40:26.605955 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:40:26.605960 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:40:26.605965 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:40:26.605969 | orchestrator |
2025-09-19 11:40:26.605974 | orchestrator | TASK [ceph-osd : Add ceph-osd
systemd service overrides] *********************** 2025-09-19 11:40:26.605979 | orchestrator | Friday 19 September 2025 11:37:20 +0000 (0:00:00.335) 0:07:42.081 ****** 2025-09-19 11:40:26.605984 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.605988 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.605993 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.605998 | orchestrator | 2025-09-19 11:40:26.606003 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-19 11:40:26.606007 | orchestrator | Friday 19 September 2025 11:37:20 +0000 (0:00:00.310) 0:07:42.392 ****** 2025-09-19 11:40:26.606012 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-19 11:40:26.606033 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-19 11:40:26.606038 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-19 11:40:26.606043 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:40:26.606048 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-09-19 11:40:26.606053 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-19 11:40:26.606058 | orchestrator | 2025-09-19 11:40:26.606063 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-19 11:40:26.606067 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:01.279) 0:07:43.671 ****** 2025-09-19 11:40:26.606072 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-19 11:40:26.606077 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-19 11:40:26.606082 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-19 11:40:26.606086 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 11:40:26.606094 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-19 11:40:26.606099 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-19 11:40:26.606104 | orchestrator | 2025-09-19 11:40:26.606108 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-19 11:40:26.606113 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:02.152) 0:07:45.823 ****** 2025-09-19 11:40:26.606118 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-19 11:40:26.606123 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-19 11:40:26.606128 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-19 11:40:26.606132 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-19 11:40:26.606137 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-09-19 11:40:26.606142 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-19 11:40:26.606147 | orchestrator | 2025-09-19 11:40:26.606151 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-19 11:40:26.606156 | orchestrator | Friday 19 September 2025 11:37:27 +0000 (0:00:03.452) 0:07:49.276 ****** 2025-09-19 11:40:26.606161 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606166 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606170 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.606175 | orchestrator | 2025-09-19 11:40:26.606180 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-19 11:40:26.606185 | orchestrator | Friday 19 September 2025 11:37:30 +0000 (0:00:02.646) 0:07:51.922 ****** 2025-09-19 11:40:26.606193 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606198 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606202 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-19 11:40:26.606207 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.606212 | orchestrator | 2025-09-19 11:40:26.606217 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-19 11:40:26.606222 | orchestrator | Friday 19 September 2025 11:37:43 +0000 (0:00:12.926) 0:08:04.849 ****** 2025-09-19 11:40:26.606226 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606231 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606236 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606241 | orchestrator | 2025-09-19 11:40:26.606245 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:40:26.606250 | orchestrator | Friday 19 September 2025 11:37:44 +0000 (0:00:00.833) 0:08:05.683 ****** 2025-09-19 11:40:26.606255 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606260 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606265 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606269 | orchestrator | 2025-09-19 11:40:26.606274 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-19 11:40:26.606279 | orchestrator | Friday 19 September 2025 11:37:44 +0000 (0:00:00.530) 0:08:06.214 ****** 2025-09-19 11:40:26.606284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.606289 | orchestrator | 2025-09-19 11:40:26.606293 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-19 11:40:26.606306 | orchestrator | Friday 19 September 2025 11:37:45 +0000 (0:00:00.548) 0:08:06.762 ****** 2025-09-19 11:40:26.606311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.606316 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-19 11:40:26.606320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.606325 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606330 | orchestrator | 2025-09-19 11:40:26.606335 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-19 11:40:26.606339 | orchestrator | Friday 19 September 2025 11:37:45 +0000 (0:00:00.381) 0:08:07.144 ****** 2025-09-19 11:40:26.606344 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606349 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606358 | orchestrator | 2025-09-19 11:40:26.606363 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-19 11:40:26.606368 | orchestrator | Friday 19 September 2025 11:37:45 +0000 (0:00:00.301) 0:08:07.446 ****** 2025-09-19 11:40:26.606373 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606377 | orchestrator | 2025-09-19 11:40:26.606382 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-19 11:40:26.606387 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:00.196) 0:08:07.643 ****** 2025-09-19 11:40:26.606392 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606396 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606401 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606406 | orchestrator | 2025-09-19 11:40:26.606411 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-19 11:40:26.606415 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:00.582) 0:08:08.225 ****** 2025-09-19 11:40:26.606420 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606425 | orchestrator | 2025-09-19 11:40:26.606432 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-19 11:40:26.606437 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:00.210) 0:08:08.435 ****** 2025-09-19 11:40:26.606442 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606450 | orchestrator | 2025-09-19 11:40:26.606455 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-19 11:40:26.606459 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.215) 0:08:08.651 ****** 2025-09-19 11:40:26.606464 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606469 | orchestrator | 2025-09-19 11:40:26.606474 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-19 11:40:26.606478 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.119) 0:08:08.770 ****** 2025-09-19 11:40:26.606483 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606488 | orchestrator | 2025-09-19 11:40:26.606496 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-19 11:40:26.606501 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.217) 0:08:08.988 ****** 2025-09-19 11:40:26.606506 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606510 | orchestrator | 2025-09-19 11:40:26.606515 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-19 11:40:26.606520 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.208) 0:08:09.196 ****** 2025-09-19 11:40:26.606525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.606530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.606534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.606539 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:40:26.606544 | orchestrator | 2025-09-19 11:40:26.606549 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-19 11:40:26.606554 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.380) 0:08:09.577 ****** 2025-09-19 11:40:26.606558 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606563 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606568 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606573 | orchestrator | 2025-09-19 11:40:26.606577 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-19 11:40:26.606582 | orchestrator | Friday 19 September 2025 11:37:48 +0000 (0:00:00.292) 0:08:09.870 ****** 2025-09-19 11:40:26.606587 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606592 | orchestrator | 2025-09-19 11:40:26.606597 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-19 11:40:26.606601 | orchestrator | Friday 19 September 2025 11:37:49 +0000 (0:00:00.747) 0:08:10.617 ****** 2025-09-19 11:40:26.606606 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606611 | orchestrator | 2025-09-19 11:40:26.606616 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-19 11:40:26.606620 | orchestrator | 2025-09-19 11:40:26.606625 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:40:26.606630 | orchestrator | Friday 19 September 2025 11:37:49 +0000 (0:00:00.664) 0:08:11.282 ****** 2025-09-19 11:40:26.606635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:40:26.606640 | orchestrator | 2025-09-19 11:40:26.606645 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 11:40:26.606650 | orchestrator | Friday 19 September 2025 11:37:50 +0000 (0:00:01.189) 0:08:12.471 ****** 2025-09-19 11:40:26.606655 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:40:26.606659 | orchestrator | 2025-09-19 11:40:26.606664 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:40:26.606669 | orchestrator | Friday 19 September 2025 11:37:52 +0000 (0:00:01.208) 0:08:13.680 ****** 2025-09-19 11:40:26.606674 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606679 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606686 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606691 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.606696 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.606700 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.606705 | orchestrator | 2025-09-19 11:40:26.606710 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:40:26.606715 | orchestrator | Friday 19 September 2025 11:37:53 +0000 (0:00:01.271) 0:08:14.952 ****** 2025-09-19 11:40:26.606720 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.606725 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.606729 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.606734 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.606739 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.606744 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.606749 | orchestrator | 2025-09-19 11:40:26.606753 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:40:26.606758 | orchestrator | Friday 19 
September 2025 11:37:54 +0000 (0:00:00.751) 0:08:15.703 ****** 2025-09-19 11:40:26.606763 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.606768 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.606772 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.606777 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.606782 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.606787 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.606792 | orchestrator | 2025-09-19 11:40:26.606796 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:40:26.606801 | orchestrator | Friday 19 September 2025 11:37:54 +0000 (0:00:00.899) 0:08:16.603 ****** 2025-09-19 11:40:26.606806 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.606811 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.606815 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.606820 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.606825 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.606830 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.606835 | orchestrator | 2025-09-19 11:40:26.606842 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:40:26.606846 | orchestrator | Friday 19 September 2025 11:37:55 +0000 (0:00:00.773) 0:08:17.376 ****** 2025-09-19 11:40:26.606851 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606856 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606861 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606866 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.606871 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.606875 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.606880 | orchestrator | 2025-09-19 11:40:26.606885 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-19 11:40:26.606890 | orchestrator | Friday 19 September 2025 11:37:56 +0000 (0:00:00.962) 0:08:18.339 ****** 2025-09-19 11:40:26.606895 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606899 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606907 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606912 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.606917 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.606921 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.606926 | orchestrator | 2025-09-19 11:40:26.606931 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:40:26.606936 | orchestrator | Friday 19 September 2025 11:37:57 +0000 (0:00:00.908) 0:08:19.247 ****** 2025-09-19 11:40:26.606941 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.606946 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.606950 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.606955 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.606960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.606965 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.606974 | orchestrator | 2025-09-19 11:40:26.606979 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:40:26.606983 | orchestrator | Friday 19 September 2025 11:37:58 +0000 (0:00:00.569) 0:08:19.817 ****** 2025-09-19 11:40:26.606988 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.606993 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.606998 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607002 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607007 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607012 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607017 | 
orchestrator | 2025-09-19 11:40:26.607022 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 11:40:26.607026 | orchestrator | Friday 19 September 2025 11:37:59 +0000 (0:00:01.303) 0:08:21.120 ****** 2025-09-19 11:40:26.607031 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607036 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607041 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607045 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607050 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607054 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607059 | orchestrator | 2025-09-19 11:40:26.607064 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 11:40:26.607069 | orchestrator | Friday 19 September 2025 11:38:00 +0000 (0:00:00.920) 0:08:22.041 ****** 2025-09-19 11:40:26.607074 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607078 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.607083 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607088 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.607093 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607097 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607102 | orchestrator | 2025-09-19 11:40:26.607107 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 11:40:26.607112 | orchestrator | Friday 19 September 2025 11:38:01 +0000 (0:00:00.710) 0:08:22.751 ****** 2025-09-19 11:40:26.607116 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607121 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.607126 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607131 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607136 | orchestrator | ok: [testbed-node-1] 2025-09-19 
11:40:26.607140 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607145 | orchestrator | 2025-09-19 11:40:26.607150 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 11:40:26.607155 | orchestrator | Friday 19 September 2025 11:38:01 +0000 (0:00:00.554) 0:08:23.305 ****** 2025-09-19 11:40:26.607159 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607164 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607169 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607174 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.607178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607183 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607188 | orchestrator | 2025-09-19 11:40:26.607193 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 11:40:26.607198 | orchestrator | Friday 19 September 2025 11:38:02 +0000 (0:00:00.863) 0:08:24.169 ****** 2025-09-19 11:40:26.607203 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607207 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607212 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607217 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.607222 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607226 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607231 | orchestrator | 2025-09-19 11:40:26.607236 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 11:40:26.607241 | orchestrator | Friday 19 September 2025 11:38:03 +0000 (0:00:00.593) 0:08:24.762 ****** 2025-09-19 11:40:26.607246 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607253 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607258 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607263 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:40:26.607268 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607272 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607277 | orchestrator | 2025-09-19 11:40:26.607282 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 11:40:26.607287 | orchestrator | Friday 19 September 2025 11:38:03 +0000 (0:00:00.826) 0:08:25.588 ****** 2025-09-19 11:40:26.607292 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607304 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.607309 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607314 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.607319 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607325 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607330 | orchestrator | 2025-09-19 11:40:26.607335 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 11:40:26.607340 | orchestrator | Friday 19 September 2025 11:38:04 +0000 (0:00:00.618) 0:08:26.207 ****** 2025-09-19 11:40:26.607345 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607350 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.607354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607359 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:26.607364 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:26.607369 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:26.607374 | orchestrator | 2025-09-19 11:40:26.607379 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 11:40:26.607383 | orchestrator | Friday 19 September 2025 11:38:05 +0000 (0:00:00.822) 0:08:27.029 ****** 2025-09-19 11:40:26.607391 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607396 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 11:40:26.607400 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607405 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607410 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607415 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607420 | orchestrator | 2025-09-19 11:40:26.607424 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 11:40:26.607429 | orchestrator | Friday 19 September 2025 11:38:06 +0000 (0:00:00.607) 0:08:27.637 ****** 2025-09-19 11:40:26.607434 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607439 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607444 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607448 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607453 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607458 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607462 | orchestrator | 2025-09-19 11:40:26.607467 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 11:40:26.607472 | orchestrator | Friday 19 September 2025 11:38:06 +0000 (0:00:00.812) 0:08:28.449 ****** 2025-09-19 11:40:26.607477 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607482 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607486 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607491 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607496 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607500 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607505 | orchestrator | 2025-09-19 11:40:26.607510 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-19 11:40:26.607515 | orchestrator | Friday 19 September 2025 11:38:08 +0000 (0:00:01.188) 0:08:29.637 ****** 2025-09-19 11:40:26.607520 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.607524 | orchestrator | 2025-09-19 11:40:26.607529 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-19 11:40:26.607534 | orchestrator | Friday 19 September 2025 11:38:12 +0000 (0:00:04.228) 0:08:33.865 ****** 2025-09-19 11:40:26.607542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.607547 | orchestrator | 2025-09-19 11:40:26.607552 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-19 11:40:26.607556 | orchestrator | Friday 19 September 2025 11:38:14 +0000 (0:00:01.970) 0:08:35.836 ****** 2025-09-19 11:40:26.607561 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.607566 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.607571 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.607576 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607580 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.607585 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.607590 | orchestrator | 2025-09-19 11:40:26.607595 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-19 11:40:26.607599 | orchestrator | Friday 19 September 2025 11:38:15 +0000 (0:00:01.519) 0:08:37.355 ****** 2025-09-19 11:40:26.607604 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.607609 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.607614 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.607619 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.607623 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.607628 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.607633 | orchestrator | 2025-09-19 11:40:26.607637 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-09-19 11:40:26.607642 | orchestrator | Friday 19 September 2025 11:38:16 +0000 (0:00:01.252) 0:08:38.608 ****** 2025-09-19 11:40:26.607647 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:40:26.607652 | orchestrator | 2025-09-19 11:40:26.607657 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-19 11:40:26.607662 | orchestrator | Friday 19 September 2025 11:38:18 +0000 (0:00:01.170) 0:08:39.779 ****** 2025-09-19 11:40:26.607667 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.607671 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.607676 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.607681 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.607685 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.607690 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.607695 | orchestrator | 2025-09-19 11:40:26.607700 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-19 11:40:26.607704 | orchestrator | Friday 19 September 2025 11:38:19 +0000 (0:00:01.484) 0:08:41.264 ****** 2025-09-19 11:40:26.607709 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.607714 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.607718 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.607724 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.607728 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.607733 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.607738 | orchestrator | 2025-09-19 11:40:26.607742 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-19 11:40:26.607747 | orchestrator | Friday 19 September 2025 11:38:23 +0000 (0:00:03.555) 
0:08:44.819 ****** 2025-09-19 11:40:26.607754 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:40:26.607760 | orchestrator | 2025-09-19 11:40:26.607764 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-19 11:40:26.607769 | orchestrator | Friday 19 September 2025 11:38:24 +0000 (0:00:01.281) 0:08:46.100 ****** 2025-09-19 11:40:26.607777 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607785 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.607793 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607801 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607814 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607823 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607828 | orchestrator | 2025-09-19 11:40:26.607833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-19 11:40:26.607838 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:00.624) 0:08:46.725 ****** 2025-09-19 11:40:26.607845 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.607850 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.607855 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.607860 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:26.607865 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:26.607869 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:26.607874 | orchestrator | 2025-09-19 11:40:26.607879 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-19 11:40:26.607884 | orchestrator | Friday 19 September 2025 11:38:27 +0000 (0:00:02.452) 0:08:49.177 ****** 2025-09-19 11:40:26.607889 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.607893 | orchestrator 
| ok: [testbed-node-4] 2025-09-19 11:40:26.607898 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.607903 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:26.607908 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:26.607912 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:26.607917 | orchestrator | 2025-09-19 11:40:26.607922 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-19 11:40:26.607927 | orchestrator | 2025-09-19 11:40:26.607932 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:40:26.607937 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:00.734) 0:08:49.912 ****** 2025-09-19 11:40:26.607941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.607946 | orchestrator | 2025-09-19 11:40:26.607951 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 11:40:26.607956 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:00.626) 0:08:50.539 ****** 2025-09-19 11:40:26.607961 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.607966 | orchestrator | 2025-09-19 11:40:26.607971 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:40:26.607976 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.452) 0:08:50.992 ****** 2025-09-19 11:40:26.607980 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.607985 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.607990 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.607995 | orchestrator | 2025-09-19 11:40:26.608000 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-09-19 11:40:26.608005 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.607) 0:08:51.599 ****** 2025-09-19 11:40:26.608009 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608014 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608019 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608024 | orchestrator | 2025-09-19 11:40:26.608029 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:40:26.608033 | orchestrator | Friday 19 September 2025 11:38:30 +0000 (0:00:00.732) 0:08:52.332 ****** 2025-09-19 11:40:26.608038 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608043 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608048 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608053 | orchestrator | 2025-09-19 11:40:26.608058 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:40:26.608063 | orchestrator | Friday 19 September 2025 11:38:31 +0000 (0:00:00.778) 0:08:53.110 ****** 2025-09-19 11:40:26.608067 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608072 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608077 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608085 | orchestrator | 2025-09-19 11:40:26.608090 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:40:26.608095 | orchestrator | Friday 19 September 2025 11:38:32 +0000 (0:00:00.732) 0:08:53.842 ****** 2025-09-19 11:40:26.608100 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608105 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608109 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608114 | orchestrator | 2025-09-19 11:40:26.608119 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 
11:40:26.608124 | orchestrator | Friday 19 September 2025 11:38:32 +0000 (0:00:00.635) 0:08:54.478 ****** 2025-09-19 11:40:26.608129 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608134 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608143 | orchestrator | 2025-09-19 11:40:26.608148 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:40:26.608153 | orchestrator | Friday 19 September 2025 11:38:33 +0000 (0:00:00.318) 0:08:54.797 ****** 2025-09-19 11:40:26.608158 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608163 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608167 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608172 | orchestrator | 2025-09-19 11:40:26.608177 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:40:26.608182 | orchestrator | Friday 19 September 2025 11:38:33 +0000 (0:00:00.319) 0:08:55.116 ****** 2025-09-19 11:40:26.608187 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608192 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608196 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608201 | orchestrator | 2025-09-19 11:40:26.608206 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 11:40:26.608213 | orchestrator | Friday 19 September 2025 11:38:34 +0000 (0:00:00.800) 0:08:55.917 ****** 2025-09-19 11:40:26.608218 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608223 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608228 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608233 | orchestrator | 2025-09-19 11:40:26.608237 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 11:40:26.608242 | orchestrator | Friday 
19 September 2025 11:38:35 +0000 (0:00:01.217) 0:08:57.135 ****** 2025-09-19 11:40:26.608247 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608252 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608257 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608262 | orchestrator | 2025-09-19 11:40:26.608267 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 11:40:26.608272 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:00.359) 0:08:57.495 ****** 2025-09-19 11:40:26.608279 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608284 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608288 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608293 | orchestrator | 2025-09-19 11:40:26.608320 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 11:40:26.608325 | orchestrator | Friday 19 September 2025 11:38:36 +0000 (0:00:00.332) 0:08:57.827 ****** 2025-09-19 11:40:26.608330 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608335 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608339 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608344 | orchestrator | 2025-09-19 11:40:26.608349 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 11:40:26.608354 | orchestrator | Friday 19 September 2025 11:38:36 +0000 (0:00:00.321) 0:08:58.149 ****** 2025-09-19 11:40:26.608359 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608364 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608369 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608373 | orchestrator | 2025-09-19 11:40:26.608378 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 11:40:26.608387 | orchestrator | Friday 19 September 2025 11:38:37 +0000 
(0:00:00.731) 0:08:58.880 ****** 2025-09-19 11:40:26.608392 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608397 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608401 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608406 | orchestrator | 2025-09-19 11:40:26.608411 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 11:40:26.608416 | orchestrator | Friday 19 September 2025 11:38:37 +0000 (0:00:00.321) 0:08:59.201 ****** 2025-09-19 11:40:26.608421 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608426 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608431 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608435 | orchestrator | 2025-09-19 11:40:26.608440 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 11:40:26.608445 | orchestrator | Friday 19 September 2025 11:38:37 +0000 (0:00:00.368) 0:08:59.569 ****** 2025-09-19 11:40:26.608450 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608455 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608460 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608465 | orchestrator | 2025-09-19 11:40:26.608470 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 11:40:26.608474 | orchestrator | Friday 19 September 2025 11:38:38 +0000 (0:00:00.313) 0:08:59.883 ****** 2025-09-19 11:40:26.608479 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608484 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608489 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608494 | orchestrator | 2025-09-19 11:40:26.608499 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 11:40:26.608503 | orchestrator | Friday 19 September 2025 11:38:38 +0000 (0:00:00.637) 
0:09:00.520 ****** 2025-09-19 11:40:26.608508 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608513 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608518 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608523 | orchestrator | 2025-09-19 11:40:26.608528 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 11:40:26.608533 | orchestrator | Friday 19 September 2025 11:38:39 +0000 (0:00:00.342) 0:09:00.862 ****** 2025-09-19 11:40:26.608537 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.608542 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.608547 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.608552 | orchestrator | 2025-09-19 11:40:26.608557 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-19 11:40:26.608561 | orchestrator | Friday 19 September 2025 11:38:39 +0000 (0:00:00.564) 0:09:01.427 ****** 2025-09-19 11:40:26.608566 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608571 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608576 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-19 11:40:26.608581 | orchestrator | 2025-09-19 11:40:26.608586 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-19 11:40:26.608590 | orchestrator | Friday 19 September 2025 11:38:40 +0000 (0:00:00.694) 0:09:02.121 ****** 2025-09-19 11:40:26.608595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.608600 | orchestrator | 2025-09-19 11:40:26.608605 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-19 11:40:26.608610 | orchestrator | Friday 19 September 2025 11:38:42 +0000 (0:00:02.212) 0:09:04.334 ****** 2025-09-19 11:40:26.608615 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-19 11:40:26.608621 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608626 | orchestrator | 2025-09-19 11:40:26.608631 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-19 11:40:26.608639 | orchestrator | Friday 19 September 2025 11:38:42 +0000 (0:00:00.235) 0:09:04.569 ****** 2025-09-19 11:40:26.608647 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:40:26.608655 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:40:26.608660 | orchestrator | 2025-09-19 11:40:26.608665 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-19 11:40:26.608673 | orchestrator | Friday 19 September 2025 11:38:49 +0000 (0:00:06.943) 0:09:11.512 ****** 2025-09-19 11:40:26.608678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 11:40:26.608683 | orchestrator | 2025-09-19 11:40:26.608688 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-19 11:40:26.608693 | orchestrator | Friday 19 September 2025 11:38:53 +0000 (0:00:03.796) 0:09:15.309 ****** 2025-09-19 11:40:26.608698 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-19 11:40:26.608703 | orchestrator | 2025-09-19 11:40:26.608708 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-19 11:40:26.608713 | orchestrator | Friday 19 September 2025 11:38:54 +0000 (0:00:00.647) 0:09:15.956 ****** 2025-09-19 11:40:26.608717 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 11:40:26.608722 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 11:40:26.608727 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-19 11:40:26.608732 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-19 11:40:26.608737 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-19 11:40:26.608742 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-19 11:40:26.608746 | orchestrator | 2025-09-19 11:40:26.608751 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-19 11:40:26.608756 | orchestrator | Friday 19 September 2025 11:38:55 +0000 (0:00:01.162) 0:09:17.119 ****** 2025-09-19 11:40:26.608761 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.608766 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 11:40:26.608771 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:40:26.608776 | orchestrator | 2025-09-19 11:40:26.608780 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-19 11:40:26.608785 | orchestrator | Friday 19 September 2025 11:38:57 +0000 (0:00:02.201) 0:09:19.321 ****** 2025-09-19 11:40:26.608790 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:40:26.608795 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-09-19 11:40:26.608800 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.608805 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:40:26.608810 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 11:40:26.608814 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.608819 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:40:26.608824 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 11:40:26.608829 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.608834 | orchestrator | 2025-09-19 11:40:26.608839 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-19 11:40:26.608844 | orchestrator | Friday 19 September 2025 11:38:59 +0000 (0:00:01.335) 0:09:20.656 ****** 2025-09-19 11:40:26.608852 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.608857 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.608861 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.608866 | orchestrator | 2025-09-19 11:40:26.608870 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-19 11:40:26.608875 | orchestrator | Friday 19 September 2025 11:39:01 +0000 (0:00:02.497) 0:09:23.154 ****** 2025-09-19 11:40:26.608879 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.608884 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.608889 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.608893 | orchestrator | 2025-09-19 11:40:26.608898 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-19 11:40:26.608902 | orchestrator | Friday 19 September 2025 11:39:02 +0000 (0:00:00.612) 0:09:23.767 ****** 2025-09-19 11:40:26.608907 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-19 11:40:26.608911 | orchestrator | 2025-09-19 11:40:26.608916 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-19 11:40:26.608920 | orchestrator | Friday 19 September 2025 11:39:02 +0000 (0:00:00.543) 0:09:24.310 ****** 2025-09-19 11:40:26.608925 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.608930 | orchestrator | 2025-09-19 11:40:26.608934 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-19 11:40:26.608939 | orchestrator | Friday 19 September 2025 11:39:03 +0000 (0:00:00.773) 0:09:25.084 ****** 2025-09-19 11:40:26.608943 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.608948 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.608952 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.608957 | orchestrator | 2025-09-19 11:40:26.608962 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-19 11:40:26.608966 | orchestrator | Friday 19 September 2025 11:39:04 +0000 (0:00:01.503) 0:09:26.587 ****** 2025-09-19 11:40:26.608973 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.608977 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.608982 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.608986 | orchestrator | 2025-09-19 11:40:26.608991 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-19 11:40:26.608995 | orchestrator | Friday 19 September 2025 11:39:06 +0000 (0:00:01.370) 0:09:27.958 ****** 2025-09-19 11:40:26.609000 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.609005 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.609009 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.609014 | orchestrator | 2025-09-19 
11:40:26.609018 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-19 11:40:26.609023 | orchestrator | Friday 19 September 2025 11:39:08 +0000 (0:00:01.885) 0:09:29.844 ****** 2025-09-19 11:40:26.609027 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.609034 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.609039 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.609043 | orchestrator | 2025-09-19 11:40:26.609048 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-19 11:40:26.609053 | orchestrator | Friday 19 September 2025 11:39:10 +0000 (0:00:02.315) 0:09:32.159 ****** 2025-09-19 11:40:26.609057 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609062 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609066 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609071 | orchestrator | 2025-09-19 11:40:26.609075 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:40:26.609080 | orchestrator | Friday 19 September 2025 11:39:11 +0000 (0:00:01.253) 0:09:33.413 ****** 2025-09-19 11:40:26.609085 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.609089 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.609094 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.609101 | orchestrator | 2025-09-19 11:40:26.609106 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-19 11:40:26.609110 | orchestrator | Friday 19 September 2025 11:39:12 +0000 (0:00:01.108) 0:09:34.522 ****** 2025-09-19 11:40:26.609115 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.609119 | orchestrator | 2025-09-19 11:40:26.609124 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-09-19 11:40:26.609129 | orchestrator | Friday 19 September 2025 11:39:13 +0000 (0:00:00.528) 0:09:35.050 ****** 2025-09-19 11:40:26.609133 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609138 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609142 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609147 | orchestrator | 2025-09-19 11:40:26.609151 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-19 11:40:26.609156 | orchestrator | Friday 19 September 2025 11:39:13 +0000 (0:00:00.296) 0:09:35.346 ****** 2025-09-19 11:40:26.609160 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.609165 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.609170 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.609174 | orchestrator | 2025-09-19 11:40:26.609179 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-19 11:40:26.609183 | orchestrator | Friday 19 September 2025 11:39:15 +0000 (0:00:01.566) 0:09:36.913 ****** 2025-09-19 11:40:26.609188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.609192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.609197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.609202 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609206 | orchestrator | 2025-09-19 11:40:26.609211 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-19 11:40:26.609215 | orchestrator | Friday 19 September 2025 11:39:15 +0000 (0:00:00.640) 0:09:37.553 ****** 2025-09-19 11:40:26.609220 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609225 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609229 | orchestrator | ok: [testbed-node-5] 2025-09-19 
11:40:26.609234 | orchestrator | 2025-09-19 11:40:26.609238 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 11:40:26.609243 | orchestrator | 2025-09-19 11:40:26.609248 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:40:26.609252 | orchestrator | Friday 19 September 2025 11:39:16 +0000 (0:00:00.522) 0:09:38.075 ****** 2025-09-19 11:40:26.609257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.609261 | orchestrator | 2025-09-19 11:40:26.609266 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 11:40:26.609271 | orchestrator | Friday 19 September 2025 11:39:17 +0000 (0:00:00.737) 0:09:38.813 ****** 2025-09-19 11:40:26.609275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.609280 | orchestrator | 2025-09-19 11:40:26.609284 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:40:26.609289 | orchestrator | Friday 19 September 2025 11:39:17 +0000 (0:00:00.544) 0:09:39.357 ****** 2025-09-19 11:40:26.609293 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609305 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609310 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609314 | orchestrator | 2025-09-19 11:40:26.609319 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:40:26.609324 | orchestrator | Friday 19 September 2025 11:39:18 +0000 (0:00:00.494) 0:09:39.852 ****** 2025-09-19 11:40:26.609328 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609333 | orchestrator | ok: [testbed-node-4] 2025-09-19 
11:40:26.609340 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609345 | orchestrator | 2025-09-19 11:40:26.609349 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:40:26.609354 | orchestrator | Friday 19 September 2025 11:39:18 +0000 (0:00:00.750) 0:09:40.602 ****** 2025-09-19 11:40:26.609358 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609363 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609372 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609377 | orchestrator | 2025-09-19 11:40:26.609381 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:40:26.609386 | orchestrator | Friday 19 September 2025 11:39:19 +0000 (0:00:00.748) 0:09:41.350 ****** 2025-09-19 11:40:26.609391 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609395 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609399 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609404 | orchestrator | 2025-09-19 11:40:26.609409 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:40:26.609413 | orchestrator | Friday 19 September 2025 11:39:20 +0000 (0:00:00.744) 0:09:42.095 ****** 2025-09-19 11:40:26.609418 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609422 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609427 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609432 | orchestrator | 2025-09-19 11:40:26.609438 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 11:40:26.609443 | orchestrator | Friday 19 September 2025 11:39:21 +0000 (0:00:00.579) 0:09:42.675 ****** 2025-09-19 11:40:26.609448 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609452 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609457 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 11:40:26.609461 | orchestrator | 2025-09-19 11:40:26.609466 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:40:26.609471 | orchestrator | Friday 19 September 2025 11:39:21 +0000 (0:00:00.307) 0:09:42.983 ****** 2025-09-19 11:40:26.609475 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609480 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609485 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609489 | orchestrator | 2025-09-19 11:40:26.609494 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:40:26.609498 | orchestrator | Friday 19 September 2025 11:39:21 +0000 (0:00:00.301) 0:09:43.284 ****** 2025-09-19 11:40:26.609503 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609507 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609512 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609516 | orchestrator | 2025-09-19 11:40:26.609521 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 11:40:26.609526 | orchestrator | Friday 19 September 2025 11:39:22 +0000 (0:00:00.757) 0:09:44.041 ****** 2025-09-19 11:40:26.609530 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609535 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609539 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609544 | orchestrator | 2025-09-19 11:40:26.609548 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 11:40:26.609553 | orchestrator | Friday 19 September 2025 11:39:23 +0000 (0:00:00.970) 0:09:45.012 ****** 2025-09-19 11:40:26.609558 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609562 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609567 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
11:40:26.609571 | orchestrator | 2025-09-19 11:40:26.609576 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 11:40:26.609580 | orchestrator | Friday 19 September 2025 11:39:23 +0000 (0:00:00.299) 0:09:45.311 ****** 2025-09-19 11:40:26.609585 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609590 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609594 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609603 | orchestrator | 2025-09-19 11:40:26.609607 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 11:40:26.609612 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:00.310) 0:09:45.622 ****** 2025-09-19 11:40:26.609616 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609621 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609626 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609630 | orchestrator | 2025-09-19 11:40:26.609635 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 11:40:26.609639 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:00.371) 0:09:45.993 ****** 2025-09-19 11:40:26.609644 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609648 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609653 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609657 | orchestrator | 2025-09-19 11:40:26.609662 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 11:40:26.609667 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:00.579) 0:09:46.572 ****** 2025-09-19 11:40:26.609671 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609676 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609680 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609685 | orchestrator | 2025-09-19 
11:40:26.609689 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 11:40:26.609694 | orchestrator | Friday 19 September 2025 11:39:25 +0000 (0:00:00.340) 0:09:46.913 ****** 2025-09-19 11:40:26.609699 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609703 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609708 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609712 | orchestrator | 2025-09-19 11:40:26.609717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 11:40:26.609722 | orchestrator | Friday 19 September 2025 11:39:25 +0000 (0:00:00.306) 0:09:47.220 ****** 2025-09-19 11:40:26.609726 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609731 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609735 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609740 | orchestrator | 2025-09-19 11:40:26.609744 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 11:40:26.609749 | orchestrator | Friday 19 September 2025 11:39:25 +0000 (0:00:00.324) 0:09:47.544 ****** 2025-09-19 11:40:26.609754 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609758 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609763 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609767 | orchestrator | 2025-09-19 11:40:26.609772 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 11:40:26.609777 | orchestrator | Friday 19 September 2025 11:39:26 +0000 (0:00:00.595) 0:09:48.140 ****** 2025-09-19 11:40:26.609781 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609786 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609790 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609795 | orchestrator | 2025-09-19 11:40:26.609801 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 11:40:26.609806 | orchestrator | Friday 19 September 2025 11:39:26 +0000 (0:00:00.342) 0:09:48.483 ****** 2025-09-19 11:40:26.609811 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.609815 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.609820 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.609824 | orchestrator | 2025-09-19 11:40:26.609829 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-19 11:40:26.609834 | orchestrator | Friday 19 September 2025 11:39:27 +0000 (0:00:00.521) 0:09:49.005 ****** 2025-09-19 11:40:26.609838 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.609843 | orchestrator | 2025-09-19 11:40:26.609848 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 11:40:26.609854 | orchestrator | Friday 19 September 2025 11:39:28 +0000 (0:00:00.746) 0:09:49.751 ****** 2025-09-19 11:40:26.609863 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.609867 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 11:40:26.609872 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:40:26.609877 | orchestrator | 2025-09-19 11:40:26.609881 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 11:40:26.609886 | orchestrator | Friday 19 September 2025 11:39:30 +0000 (0:00:02.280) 0:09:52.031 ****** 2025-09-19 11:40:26.609890 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:40:26.609895 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 11:40:26.609899 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.609904 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-09-19 11:40:26.609909 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 11:40:26.609913 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.609918 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:40:26.609922 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 11:40:26.609927 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.609931 | orchestrator | 2025-09-19 11:40:26.609936 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-19 11:40:26.609941 | orchestrator | Friday 19 September 2025 11:39:31 +0000 (0:00:01.284) 0:09:53.316 ****** 2025-09-19 11:40:26.609945 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.609950 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.609954 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.609959 | orchestrator | 2025-09-19 11:40:26.609963 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-19 11:40:26.609968 | orchestrator | Friday 19 September 2025 11:39:32 +0000 (0:00:00.338) 0:09:53.654 ****** 2025-09-19 11:40:26.609972 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.609977 | orchestrator | 2025-09-19 11:40:26.609982 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-19 11:40:26.609986 | orchestrator | Friday 19 September 2025 11:39:32 +0000 (0:00:00.717) 0:09:54.372 ****** 2025-09-19 11:40:26.609991 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.609996 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.610000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.610005 | orchestrator | 2025-09-19 11:40:26.610010 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-19 11:40:26.610029 | orchestrator | Friday 19 September 2025 11:39:33 +0000 (0:00:00.825) 0:09:55.198 ****** 2025-09-19 11:40:26.610034 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610039 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 11:40:26.610044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610048 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 11:40:26.610053 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610058 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 11:40:26.610062 | orchestrator | 2025-09-19 11:40:26.610070 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 11:40:26.610075 | orchestrator | Friday 19 September 2025 11:39:38 +0000 (0:00:04.616) 0:09:59.815 ****** 2025-09-19 11:40:26.610079 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610084 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:40:26.610088 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610093 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:40:26.610098 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:40:26.610104 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:40:26.610109 | orchestrator | 2025-09-19 11:40:26.610114 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 11:40:26.610118 | orchestrator | Friday 19 September 2025 11:39:40 +0000 (0:00:02.750) 0:10:02.565 ****** 2025-09-19 11:40:26.610123 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:40:26.610127 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.610132 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:40:26.610137 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.610141 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:40:26.610146 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.610150 | orchestrator | 2025-09-19 11:40:26.610155 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-19 11:40:26.610162 | orchestrator | Friday 19 September 2025 11:39:42 +0000 (0:00:01.269) 0:10:03.835 ****** 2025-09-19 11:40:26.610167 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-19 11:40:26.610172 | orchestrator | 2025-09-19 11:40:26.610176 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-19 11:40:26.610181 | orchestrator | Friday 19 September 2025 11:39:42 +0000 (0:00:00.234) 0:10:04.069 ****** 2025-09-19 11:40:26.610186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-19 11:40:26.610191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610209 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.610214 | orchestrator | 2025-09-19 11:40:26.610219 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 11:40:26.610223 | orchestrator | Friday 19 September 2025 11:39:43 +0000 (0:00:00.574) 0:10:04.644 ****** 2025-09-19 11:40:26.610228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:40:26.610254 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:40:26.610259 | orchestrator | 2025-09-19 11:40:26.610264 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 11:40:26.610268 | orchestrator | Friday 19 September 2025 11:39:43 +0000 (0:00:00.558) 0:10:05.203 ****** 2025-09-19 11:40:26.610273 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:40:26.610278 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:40:26.610282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:40:26.610287 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:40:26.610292 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:40:26.610303 | orchestrator | 2025-09-19 11:40:26.610308 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 11:40:26.610313 | orchestrator | Friday 19 September 2025 11:40:12 +0000 (0:00:28.453) 0:10:33.656 ****** 2025-09-19 11:40:26.610317 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.610322 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.610326 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.610331 | orchestrator | 2025-09-19 11:40:26.610335 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 11:40:26.610340 | orchestrator | 
Friday 19 September 2025 11:40:12 +0000 (0:00:00.300) 0:10:33.956 ****** 2025-09-19 11:40:26.610344 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.610349 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.610353 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.610358 | orchestrator | 2025-09-19 11:40:26.610365 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 11:40:26.610370 | orchestrator | Friday 19 September 2025 11:40:12 +0000 (0:00:00.562) 0:10:34.519 ****** 2025-09-19 11:40:26.610374 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.610379 | orchestrator | 2025-09-19 11:40:26.610384 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 11:40:26.610388 | orchestrator | Friday 19 September 2025 11:40:13 +0000 (0:00:00.519) 0:10:35.039 ****** 2025-09-19 11:40:26.610393 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.610397 | orchestrator | 2025-09-19 11:40:26.610404 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 11:40:26.610409 | orchestrator | Friday 19 September 2025 11:40:14 +0000 (0:00:00.651) 0:10:35.690 ****** 2025-09-19 11:40:26.610413 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.610418 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.610422 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.610427 | orchestrator | 2025-09-19 11:40:26.610431 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 11:40:26.610436 | orchestrator | Friday 19 September 2025 11:40:15 +0000 (0:00:01.378) 0:10:37.069 ****** 2025-09-19 11:40:26.610441 | orchestrator | changed: 
[testbed-node-3] 2025-09-19 11:40:26.610445 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.610450 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.610454 | orchestrator | 2025-09-19 11:40:26.610459 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 11:40:26.610467 | orchestrator | Friday 19 September 2025 11:40:16 +0000 (0:00:01.198) 0:10:38.267 ****** 2025-09-19 11:40:26.610471 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:40:26.610476 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:40:26.610480 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:40:26.610485 | orchestrator | 2025-09-19 11:40:26.610489 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 11:40:26.610494 | orchestrator | Friday 19 September 2025 11:40:18 +0000 (0:00:01.997) 0:10:40.264 ****** 2025-09-19 11:40:26.610499 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.610503 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.610508 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:40:26.610512 | orchestrator | 2025-09-19 11:40:26.610517 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:40:26.610522 | orchestrator | Friday 19 September 2025 11:40:21 +0000 (0:00:02.487) 0:10:42.752 ****** 2025-09-19 11:40:26.610526 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.610531 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.610535 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.610540 | orchestrator 
| 2025-09-19 11:40:26.610544 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-19 11:40:26.610549 | orchestrator | Friday 19 September 2025 11:40:21 +0000 (0:00:00.271) 0:10:43.023 ****** 2025-09-19 11:40:26.610553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:40:26.610558 | orchestrator | 2025-09-19 11:40:26.610563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 11:40:26.610567 | orchestrator | Friday 19 September 2025 11:40:22 +0000 (0:00:00.645) 0:10:43.669 ****** 2025-09-19 11:40:26.610572 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:40:26.610576 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:40:26.610581 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:40:26.610585 | orchestrator | 2025-09-19 11:40:26.610590 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 11:40:26.610594 | orchestrator | Friday 19 September 2025 11:40:22 +0000 (0:00:00.279) 0:10:43.948 ****** 2025-09-19 11:40:26.610599 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:40:26.610603 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:40:26.610608 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:40:26.610612 | orchestrator | 2025-09-19 11:40:26.610617 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 11:40:26.610622 | orchestrator | Friday 19 September 2025 11:40:22 +0000 (0:00:00.302) 0:10:44.250 ****** 2025-09-19 11:40:26.610626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:40:26.610631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:40:26.610635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:40:26.610640 | orchestrator 
| skipping: [testbed-node-3]
2025-09-19 11:40:26.610645 | orchestrator |
2025-09-19 11:40:26.610649 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 11:40:26.610654 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:01.127) 0:10:45.378 ******
2025-09-19 11:40:26.610658 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:40:26.610663 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:40:26.610667 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:40:26.610672 | orchestrator |
2025-09-19 11:40:26.610677 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:40:26.610681 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2025-09-19 11:40:26.610689 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-19 11:40:26.610696 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-19 11:40:26.610701 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2025-09-19 11:40:26.610706 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-19 11:40:26.610712 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-19 11:40:26.610717 | orchestrator |
2025-09-19 11:40:26.610722 | orchestrator |
2025-09-19 11:40:26.610726 | orchestrator |
2025-09-19 11:40:26.610731 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:40:26.610735 | orchestrator | Friday 19 September 2025 11:40:24 +0000 (0:00:00.243) 0:10:45.621 ******
2025-09-19 11:40:26.610740 | orchestrator | ===============================================================================
2025-09-19 11:40:26.610744 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 42.23s
2025-09-19 11:40:26.610749 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.75s
2025-09-19 11:40:26.610753 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 37.01s
2025-09-19 11:40:26.610758 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.45s
2025-09-19 11:40:26.610762 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.96s
2025-09-19 11:40:26.610767 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.94s
2025-09-19 11:40:26.610771 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s
2025-09-19 11:40:26.610776 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.62s
2025-09-19 11:40:26.610780 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.10s
2025-09-19 11:40:26.610785 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.98s
2025-09-19 11:40:26.610789 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.94s
2025-09-19 11:40:26.610794 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.34s
2025-09-19 11:40:26.610798 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s
2025-09-19 11:40:26.610803 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.62s
2025-09-19 11:40:26.610807 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.23s
2025-09-19 11:40:26.610812 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.80s
2025-09-19 11:40:26.610817 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.65s
2025-09-19 11:40:26.610821 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.61s
2025-09-19 11:40:26.610826 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.56s
2025-09-19 11:40:26.610830 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s
2025-09-19 11:40:26.610835 | orchestrator | 2025-09-19 11:40:26 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:40:26.610840 | orchestrator | 2025-09-19 11:40:26 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED
2025-09-19 11:40:26.610844 | orchestrator | 2025-09-19 11:40:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:29.635830 | orchestrator | 2025-09-19 11:40:29 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:40:29.637695 | orchestrator | 2025-09-19 11:40:29 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:40:29.639653 | orchestrator | 2025-09-19 11:40:29 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED
2025-09-19 11:40:29.639689 | orchestrator | 2025-09-19 11:40:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:32.683813 | orchestrator | 2025-09-19 11:40:32 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:40:32.684868 | orchestrator | 2025-09-19 11:40:32 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:40:32.685845 | orchestrator | 2025-09-19 11:40:32 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED
2025-09-19 11:40:32.685877 | orchestrator | 2025-09-19 11:40:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:35.730202 | orchestrator | 2025-09-19 11:40:35 | INFO  | Task
ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:35.731934 | orchestrator | 2025-09-19 11:40:35 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:35.734631 | orchestrator | 2025-09-19 11:40:35 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:35.734807 | orchestrator | 2025-09-19 11:40:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:38.776819 | orchestrator | 2025-09-19 11:40:38 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:38.778795 | orchestrator | 2025-09-19 11:40:38 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:38.781069 | orchestrator | 2025-09-19 11:40:38 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:38.781094 | orchestrator | 2025-09-19 11:40:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:41.827787 | orchestrator | 2025-09-19 11:40:41 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:41.829939 | orchestrator | 2025-09-19 11:40:41 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:41.831836 | orchestrator | 2025-09-19 11:40:41 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:41.832064 | orchestrator | 2025-09-19 11:40:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:44.872771 | orchestrator | 2025-09-19 11:40:44 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:44.874908 | orchestrator | 2025-09-19 11:40:44 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:44.876036 | orchestrator | 2025-09-19 11:40:44 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:44.876544 | orchestrator | 2025-09-19 11:40:44 | INFO  | Wait 1 second(s) until the next 
check 2025-09-19 11:40:47.925490 | orchestrator | 2025-09-19 11:40:47 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:47.926345 | orchestrator | 2025-09-19 11:40:47 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:47.929328 | orchestrator | 2025-09-19 11:40:47 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:47.929367 | orchestrator | 2025-09-19 11:40:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:50.967329 | orchestrator | 2025-09-19 11:40:50 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:50.967446 | orchestrator | 2025-09-19 11:40:50 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:50.968453 | orchestrator | 2025-09-19 11:40:50 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:50.968488 | orchestrator | 2025-09-19 11:40:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:54.007227 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:54.009208 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:54.011016 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:40:54.011040 | orchestrator | 2025-09-19 11:40:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:57.050448 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:40:57.052730 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:40:57.053785 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 
11:40:57.053833 | orchestrator | 2025-09-19 11:40:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:00.098249 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:00.100805 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:41:00.104560 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:00.104596 | orchestrator | 2025-09-19 11:41:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:03.155244 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:03.156642 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:41:03.158692 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:03.159272 | orchestrator | 2025-09-19 11:41:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:06.208075 | orchestrator | 2025-09-19 11:41:06 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:06.209854 | orchestrator | 2025-09-19 11:41:06 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:41:06.211676 | orchestrator | 2025-09-19 11:41:06 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:06.211777 | orchestrator | 2025-09-19 11:41:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:09.263026 | orchestrator | 2025-09-19 11:41:09 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:09.264085 | orchestrator | 2025-09-19 11:41:09 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED 2025-09-19 11:41:09.266581 | orchestrator | 2025-09-19 11:41:09 | 
INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED
2025-09-19 11:41:09.266635 | orchestrator | 2025-09-19 11:41:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:41:12.302404 | orchestrator | 2025-09-19 11:41:12 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:41:12.304185 | orchestrator | 2025-09-19 11:41:12 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state STARTED
2025-09-19 11:41:12.306304 | orchestrator | 2025-09-19 11:41:12 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED
2025-09-19 11:41:12.306859 | orchestrator | 2025-09-19 11:41:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:41:15.342652 | orchestrator | 2025-09-19 11:41:15 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED
2025-09-19 11:41:15.345365 | orchestrator | 2025-09-19 11:41:15 | INFO  | Task bbdfc8aa-b4be-41dc-b20a-4f45bee0eb46 is in state SUCCESS
2025-09-19 11:41:15.346823 | orchestrator |
2025-09-19 11:41:15.346869 | orchestrator |
2025-09-19 11:41:15.346882 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:41:15.346894 | orchestrator |
2025-09-19 11:41:15.346905 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:41:15.346916 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:00.410) 0:00:00.410 ******
2025-09-19 11:41:15.346949 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:15.346969 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:41:15.346988 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:41:15.347013 | orchestrator |
2025-09-19 11:41:15.347034 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:41:15.347051 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:00.359) 0:00:00.769 ******
2025-09-19 11:41:15.347068 | orchestrator |
ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-19 11:41:15.347085 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-19 11:41:15.347103 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-19 11:41:15.347120 | orchestrator | 2025-09-19 11:41:15.347137 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-19 11:41:15.347154 | orchestrator | 2025-09-19 11:41:15.347173 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:41:15.347192 | orchestrator | Friday 19 September 2025 11:38:26 +0000 (0:00:00.329) 0:00:01.099 ****** 2025-09-19 11:41:15.347211 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:41:15.347250 | orchestrator | 2025-09-19 11:41:15.347270 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-19 11:41:15.347346 | orchestrator | Friday 19 September 2025 11:38:26 +0000 (0:00:00.460) 0:00:01.560 ****** 2025-09-19 11:41:15.347368 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 11:41:15.347387 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 11:41:15.347406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 11:41:15.347439 | orchestrator | 2025-09-19 11:41:15.347452 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-19 11:41:15.347465 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:01.644) 0:00:03.204 ****** 2025-09-19 11:41:15.347499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347668 | orchestrator | 2025-09-19 11:41:15.347681 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:41:15.347694 | orchestrator | Friday 19 September 2025 11:38:30 +0000 (0:00:01.792) 0:00:04.997 ****** 2025-09-19 11:41:15.347706 | 
orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:41:15.347718 | orchestrator | 2025-09-19 11:41:15.347730 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-19 11:41:15.347742 | orchestrator | Friday 19 September 2025 11:38:30 +0000 (0:00:00.633) 0:00:05.631 ****** 2025-09-19 11:41:15.347764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.347820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.347876 | orchestrator | 2025-09-19 11:41:15.347887 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-19 11:41:15.347898 | orchestrator | Friday 19 September 2025 11:38:33 +0000 (0:00:03.038) 0:00:08.670 ****** 2025-09-19 11:41:15.347910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.347930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.347942 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:15.347954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.348073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.348104 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:15.348124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.348181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.348224 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:15.348249 | orchestrator | 2025-09-19 11:41:15.348268 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-19 11:41:15.348328 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:01.198) 0:00:09.868 ****** 2025-09-19 11:41:15.348348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.348420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.348442 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:15.348461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.348511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.348532 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:15.348550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:41:15.348582 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:41:15.348602 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:15.348630 | orchestrator | 2025-09-19 11:41:15.348656 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-19 11:41:15.348681 | orchestrator | Friday 19 September 2025 11:38:36 +0000 (0:00:01.243) 0:00:11.112 ****** 2025-09-19 11:41:15.348723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.348760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.348779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.348812 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.348833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.348873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.348894 | orchestrator | 2025-09-19 11:41:15.348913 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-19 11:41:15.348932 | orchestrator | Friday 19 September 2025 11:38:39 +0000 (0:00:02.798) 0:00:13.911 ****** 2025-09-19 11:41:15.348951 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.348970 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:15.348989 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:15.349007 | orchestrator | 2025-09-19 11:41:15.349026 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-19 11:41:15.349043 | orchestrator | Friday 19 September 2025 11:38:41 +0000 
(0:00:02.721) 0:00:16.632 ****** 2025-09-19 11:41:15.349062 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.349100 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:15.349122 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:15.349141 | orchestrator | 2025-09-19 11:41:15.349159 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-19 11:41:15.349178 | orchestrator | Friday 19 September 2025 11:38:44 +0000 (0:00:02.517) 0:00:19.149 ****** 2025-09-19 11:41:15.349197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.349231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.349265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:41:15.349323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.349346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.349380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:41:15.349430 | orchestrator | 2025-09-19 11:41:15.349451 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:41:15.349477 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:01.784) 0:00:20.934 ****** 2025-09-19 11:41:15.349498 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:15.349515 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:15.349533 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:15.349550 | orchestrator | 2025-09-19 11:41:15.349568 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:41:15.349586 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:00.289) 0:00:21.223 ****** 2025-09-19 11:41:15.349604 | orchestrator | 2025-09-19 11:41:15.349622 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:41:15.349649 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:00.061) 0:00:21.285 ****** 2025-09-19 11:41:15.349670 | orchestrator | 2025-09-19 11:41:15.349688 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:41:15.349706 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:00.068) 0:00:21.354 ****** 2025-09-19 11:41:15.349746 | orchestrator | 2025-09-19 11:41:15.349774 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-19 11:41:15.349793 | orchestrator | 
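The container definitions above each carry a `healthcheck` that runs `healthcheck_curl` against the node's bound address and port. As a rough illustration of what such a liveness probe amounts to, here is a minimal Python sketch (this is not the kolla script itself; the function name and semantics are assumptions):

```python
import urllib.request
import urllib.error

def http_healthcheck(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint is reachable over HTTP.

    In the spirit of a curl-based container healthcheck, reachability is
    what matters: an HTTP error response still means the process is up.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 500
    except urllib.error.HTTPError:
        # The server answered with an error status, so it is alive.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: the check fails.
        return False
```

The `interval`/`retries`/`timeout` values in the log (`30`/`3`/`30`) map onto how often the container engine runs such a probe and when it marks the container unhealthy.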
Friday 19 September 2025 11:38:46 +0000 (0:00:00.068) 0:00:21.422 ****** 2025-09-19 11:41:15.349812 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:15.349829 | orchestrator | 2025-09-19 11:41:15.349848 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-19 11:41:15.349868 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:00.304) 0:00:21.727 ****** 2025-09-19 11:41:15.349885 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:15.349904 | orchestrator | 2025-09-19 11:41:15.349915 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-19 11:41:15.349926 | orchestrator | Friday 19 September 2025 11:38:47 +0000 (0:00:00.617) 0:00:22.344 ****** 2025-09-19 11:41:15.349937 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.349948 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:15.349958 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:15.349969 | orchestrator | 2025-09-19 11:41:15.349980 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-19 11:41:15.349990 | orchestrator | Friday 19 September 2025 11:39:43 +0000 (0:00:55.577) 0:01:17.922 ****** 2025-09-19 11:41:15.350001 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.350012 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:15.350118 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:15.350138 | orchestrator | 2025-09-19 11:41:15.350170 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:41:15.350195 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:01:18.688) 0:02:36.610 ****** 2025-09-19 11:41:15.350213 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:41:15.350231 | orchestrator | 2025-09-19 
11:41:15.350249 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-19 11:41:15.350266 | orchestrator | Friday 19 September 2025 11:41:02 +0000 (0:00:00.474) 0:02:37.084 ****** 2025-09-19 11:41:15.350391 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:15.350410 | orchestrator | 2025-09-19 11:41:15.350427 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-19 11:41:15.350444 | orchestrator | Friday 19 September 2025 11:41:05 +0000 (0:00:02.725) 0:02:39.810 ****** 2025-09-19 11:41:15.350461 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:15.350477 | orchestrator | 2025-09-19 11:41:15.350493 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-19 11:41:15.350510 | orchestrator | Friday 19 September 2025 11:41:07 +0000 (0:00:02.288) 0:02:42.098 ****** 2025-09-19 11:41:15.350576 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.350593 | orchestrator | 2025-09-19 11:41:15.350609 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-19 11:41:15.350626 | orchestrator | Friday 19 September 2025 11:41:09 +0000 (0:00:02.669) 0:02:44.768 ****** 2025-09-19 11:41:15.350642 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:15.350658 | orchestrator | 2025-09-19 11:41:15.350690 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:41:15.350708 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:41:15.350726 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:41:15.350742 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:41:15.350758 | orchestrator | 2025-09-19 11:41:15.350775 
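The log-retention tasks above ("Check if a log retention policy exists", "Create new log retention policy") talk to OpenSearch's Index State Management API. A hedged sketch of the request shape involved, assuming a simple delete-after-age policy (the policy id, age, and state machine below are illustrative, not read from the playbook):

```python
import json

def build_retention_policy(policy_id: str, min_index_age: str):
    """Build the ISM endpoint path and JSON body for a delete-after-age policy.

    /_plugins/_ism/policies/<id> is the documented OpenSearch ISM endpoint;
    the hot -> delete state machine is a common minimal retention shape.
    """
    path = f"/_plugins/_ism/policies/{policy_id}"
    body = {
        "policy": {
            "description": f"Delete indices older than {min_index_age}",
            "default_state": "hot",
            "states": [
                {
                    "name": "hot",
                    "actions": [],
                    "transitions": [
                        {"state_name": "delete",
                         "conditions": {"min_index_age": min_index_age}}
                    ],
                },
                {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
            ],
        }
    }
    return path, json.dumps(body)
```

The "Apply retention policy to existing indices" task would then attach the policy to already-created indices, which ISM does not do retroactively on its own.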
| orchestrator | 2025-09-19 11:41:15.350790 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:41:15.350820 | orchestrator | Friday 19 September 2025 11:41:12 +0000 (0:00:02.608) 0:02:47.376 ****** 2025-09-19 11:41:15.350837 | orchestrator | =============================================================================== 2025-09-19 11:41:15.350852 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.69s 2025-09-19 11:41:15.350868 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.58s 2025-09-19 11:41:15.350884 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.04s 2025-09-19 11:41:15.350899 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.80s 2025-09-19 11:41:15.350914 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.73s 2025-09-19 11:41:15.350946 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.72s 2025-09-19 11:41:15.350963 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.67s 2025-09-19 11:41:15.350978 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s 2025-09-19 11:41:15.350993 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.52s 2025-09-19 11:41:15.351009 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.29s 2025-09-19 11:41:15.351025 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.79s 2025-09-19 11:41:15.351041 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.78s 2025-09-19 11:41:15.351058 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.64s 
2025-09-19 11:41:15.351077 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.24s 2025-09-19 11:41:15.351100 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.20s 2025-09-19 11:41:15.351116 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.63s 2025-09-19 11:41:15.351133 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.62s 2025-09-19 11:41:15.351149 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-09-19 11:41:15.351166 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-09-19 11:41:15.351182 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-09-19 11:41:15.351198 | orchestrator | 2025-09-19 11:41:15 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:15.351230 | orchestrator | 2025-09-19 11:41:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:18.395330 | orchestrator | 2025-09-19 11:41:18 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:18.396265 | orchestrator | 2025-09-19 11:41:18 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:18.396308 | orchestrator | 2025-09-19 11:41:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:21.439873 | orchestrator | 2025-09-19 11:41:21 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:21.441802 | orchestrator | 2025-09-19 11:41:21 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:21.441958 | orchestrator | 2025-09-19 11:41:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:24.482706 | orchestrator | 2025-09-19 11:41:24 | INFO  | Task 
ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:24.484145 | orchestrator | 2025-09-19 11:41:24 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:24.484319 | orchestrator | 2025-09-19 11:41:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:27.520157 | orchestrator | 2025-09-19 11:41:27 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:27.522235 | orchestrator | 2025-09-19 11:41:27 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:27.522265 | orchestrator | 2025-09-19 11:41:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:30.573732 | orchestrator | 2025-09-19 11:41:30 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state STARTED 2025-09-19 11:41:30.577217 | orchestrator | 2025-09-19 11:41:30 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:30.577355 | orchestrator | 2025-09-19 11:41:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:33.634351 | orchestrator | 2025-09-19 11:41:33 | INFO  | Task ff9c42fc-193e-44b0-af59-a9163cf2ad70 is in state SUCCESS 2025-09-19 11:41:33.635542 | orchestrator | 2025-09-19 11:41:33.635581 | orchestrator | 2025-09-19 11:41:33.635594 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-19 11:41:33.635606 | orchestrator | 2025-09-19 11:41:33.635618 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-19 11:41:33.635629 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:00.102) 0:00:00.102 ****** 2025-09-19 11:41:33.635641 | orchestrator | ok: [localhost] => { 2025-09-19 11:41:33.635653 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
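The orchestrator lines above poll each task id once per interval until it leaves STARTED for a terminal state such as SUCCESS. A minimal sketch of that wait loop (the `fetch_state` callable stands in for whatever the osism tooling actually queries):

```python
import time
from typing import Callable, Sequence

def wait_for_state(fetch_state: Callable[[], str],
                   done_states: Sequence[str] = ("SUCCESS", "FAILURE"),
                   interval: float = 1.0,
                   max_checks: int = 10) -> str:
    """Poll fetch_state() until it reports a terminal state.

    Mirrors the log's "Wait 1 second(s) until the next check" cadence;
    gives up and returns the last seen state after max_checks polls.
    """
    state = fetch_state()
    for _ in range(max_checks):
        if state in done_states:
            return state
        time.sleep(interval)
        state = fetch_state()
    return state
```
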
2025-09-19 11:41:33.635664 | orchestrator | } 2025-09-19 11:41:33.635675 | orchestrator | 2025-09-19 11:41:33.635687 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-19 11:41:33.635698 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:00.045) 0:00:00.147 ****** 2025-09-19 11:41:33.635709 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-19 11:41:33.635721 | orchestrator | ...ignoring 2025-09-19 11:41:33.635733 | orchestrator | 2025-09-19 11:41:33.635744 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-19 11:41:33.635755 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:02.909) 0:00:03.057 ****** 2025-09-19 11:41:33.635766 | orchestrator | skipping: [localhost] 2025-09-19 11:41:33.635777 | orchestrator | 2025-09-19 11:41:33.635787 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-19 11:41:33.635798 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:00.050) 0:00:03.108 ****** 2025-09-19 11:41:33.635809 | orchestrator | ok: [localhost] 2025-09-19 11:41:33.635820 | orchestrator | 2025-09-19 11:41:33.635831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:41:33.635865 | orchestrator | 2025-09-19 11:41:33.635885 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:41:33.635904 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:00.127) 0:00:03.236 ****** 2025-09-19 11:41:33.635924 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:33.635942 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:41:33.635961 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:41:33.635981 | orchestrator | 2025-09-19 11:41:33.636000 | 
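The "Check MariaDB service" task is a `wait_for`-style probe: connect to the VIP on 3306 and search the server's initial handshake for the string `MariaDB`, timing out (as it does here, deliberately ignored) when the service is not yet deployed. A rough Python equivalent of that check, as a sketch rather than the module's actual code:

```python
import socket

def greeting_contains(host: str, port: int, needle: bytes,
                      timeout: float = 2.0) -> bool:
    """Connect and check whether the server's initial banner contains needle.

    MySQL/MariaDB servers send a handshake packet embedding the server
    version string, which is what a search for b"MariaDB" matches on.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return needle in sock.recv(1024)
    except OSError:
        # Connection refused or timed out: service not (yet) reachable.
        return False
```

Because the result is ignored, a failed probe simply routes the play into the fresh-deploy branch instead of the upgrade branch, which is exactly what the following skipped/ok pair of `Set kolla_action_mariadb` tasks shows.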
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:41:33.636017 | orchestrator | Friday 19 September 2025 11:38:28 +0000 (0:00:00.265) 0:00:03.502 ****** 2025-09-19 11:41:33.636029 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-19 11:41:33.636040 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-19 11:41:33.636051 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 11:41:33.636061 | orchestrator | 2025-09-19 11:41:33.636072 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 11:41:33.636083 | orchestrator | 2025-09-19 11:41:33.636094 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 11:41:33.636104 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.711) 0:00:04.213 ****** 2025-09-19 11:41:33.636116 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:41:33.636128 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 11:41:33.636141 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 11:41:33.636153 | orchestrator | 2025-09-19 11:41:33.636166 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 11:41:33.636179 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.360) 0:00:04.574 ****** 2025-09-19 11:41:33.636191 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:41:33.636204 | orchestrator | 2025-09-19 11:41:33.636217 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-19 11:41:33.636229 | orchestrator | Friday 19 September 2025 11:38:30 +0000 (0:00:00.538) 0:00:05.113 ****** 2025-09-19 11:41:33.636294 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.636328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.636349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.636363 | orchestrator | 2025-09-19 11:41:33.636382 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 11:41:33.636395 | orchestrator | Friday 19 September 2025 11:38:33 +0000 (0:00:03.440) 0:00:08.554 ****** 2025-09-19 11:41:33.636407 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.636420 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.636438 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.636451 | orchestrator | 2025-09-19 11:41:33.636463 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 11:41:33.636475 | orchestrator | Friday 19 September 2025 11:38:34 +0000 (0:00:00.745) 0:00:09.299 ****** 2025-09-19 11:41:33.636487 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.636500 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.636511 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.636522 | orchestrator | 2025-09-19 11:41:33.636533 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 11:41:33.636543 | orchestrator | Friday 19 September 2025 11:38:36 +0000 (0:00:01.663) 0:00:10.962 ****** 2025-09-19 11:41:33.636555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.636579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.636599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:41:33.636611 | orchestrator |
2025-09-19 11:41:33.636622 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-19 11:41:33.636633 | orchestrator | Friday 19 September 2025 11:38:40 +0000 (0:00:03.946) 0:00:14.909 ******
2025-09-19 11:41:33.636644 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.636654 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.636665 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.636676 | orchestrator |
2025-09-19 11:41:33.636687 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-19 11:41:33.636698 | orchestrator | Friday 19 September 2025 11:38:41 +0000 (0:00:01.349) 0:00:16.258 ******
2025-09-19 11:41:33.636708 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:41:33.636719 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.636730 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:41:33.636741 | orchestrator |
2025-09-19 11:41:33.636751 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-19 11:41:33.636767 | orchestrator | Friday 19 September 2025 11:38:45 +0000 (0:00:04.089) 0:00:20.348 ******
2025-09-19 11:41:33.636778 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:41:33.636789 | orchestrator |
2025-09-19 11:41:33.636808 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19
11:41:33.636823 | orchestrator | Friday 19 September 2025 11:38:46 +0000 (0:00:00.504) 0:00:20.853 ****** 2025-09-19 11:41:33.636844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.636864 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 11:41:33.636876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.636888 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.636911 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.636932 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.637001 | orchestrator | 2025-09-19 11:41:33.637126 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-09-19 11:41:33.637139 | orchestrator | Friday 19 September 2025 11:38:48 +0000 (0:00:02.547) 0:00:23.400 ****** 2025-09-19 11:41:33.637151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-09-19 11:41:33.637164 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.637190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.637212 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 11:41:33.637224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.637236 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.637247 | orchestrator | 2025-09-19 
11:41:33.637258 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 11:41:33.637289 | orchestrator | Friday 19 September 2025 11:38:51 +0000 (0:00:02.651) 0:00:26.051 ****** 2025-09-19 11:41:33.637306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.637330 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.637350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-19 11:41:33.637363 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.637378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:41:33.637398 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 11:41:33.637409 | orchestrator | 2025-09-19 11:41:33.637419 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 11:41:33.637430 | orchestrator | Friday 19 September 2025 11:38:53 +0000 (0:00:02.708) 0:00:28.760 ****** 2025-09-19 11:41:33.637450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.637468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-09-19 11:41:33.637496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:41:33.637509 | orchestrator | 2025-09-19 11:41:33.637520 | orchestrator | TASK [mariadb : Create MariaDB volume] 
*****************************************
2025-09-19 11:41:33.637530 | orchestrator | Friday 19 September 2025 11:38:57 +0000 (0:00:03.420) 0:00:32.181 ******
2025-09-19 11:41:33.637541 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.637552 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:41:33.637563 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:41:33.637573 | orchestrator |
2025-09-19 11:41:33.637584 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-19 11:41:33.637595 | orchestrator | Friday 19 September 2025 11:38:58 +0000 (0:00:00.863) 0:00:33.044 ******
2025-09-19 11:41:33.637605 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.637616 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:41:33.637626 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:41:33.637637 | orchestrator |
2025-09-19 11:41:33.637648 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-19 11:41:33.637659 | orchestrator | Friday 19 September 2025 11:38:58 +0000 (0:00:00.556) 0:00:33.600 ******
2025-09-19 11:41:33.637669 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.637680 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:41:33.637691 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:41:33.637701 | orchestrator |
2025-09-19 11:41:33.637712 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-19 11:41:33.637731 | orchestrator | Friday 19 September 2025 11:38:59 +0000 (0:00:00.338) 0:00:33.939 ******
2025-09-19 11:41:33.637743 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-19 11:41:33.637755 | orchestrator | ...ignoring
2025-09-19 11:41:33.637768 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-19 11:41:33.637780 | orchestrator | ...ignoring
2025-09-19 11:41:33.637793 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-19 11:41:33.637805 | orchestrator | ...ignoring
2025-09-19 11:41:33.637823 | orchestrator |
2025-09-19 11:41:33.637841 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-19 11:41:33.637859 | orchestrator | Friday 19 September 2025 11:39:10 +0000 (0:00:11.060) 0:00:45.000 ******
2025-09-19 11:41:33.637871 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.637928 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:41:33.637942 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:41:33.638102 | orchestrator |
2025-09-19 11:41:33.638117 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-19 11:41:33.638128 | orchestrator | Friday 19 September 2025 11:39:10 +0000 (0:00:00.422) 0:00:45.422 ******
2025-09-19 11:41:33.638139 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638150 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638161 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638171 | orchestrator |
2025-09-19 11:41:33.638182 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-19 11:41:33.638193 | orchestrator | Friday 19 September 2025 11:39:11 +0000 (0:00:00.637) 0:00:46.060 ******
2025-09-19 11:41:33.638203 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638214 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638224 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638235 | orchestrator |
2025-09-19 11:41:33.638246 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-19 11:41:33.638256 | orchestrator | Friday 19 September 2025 11:39:11 +0000 (0:00:00.437) 0:00:46.497 ******
2025-09-19 11:41:33.638267 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638296 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638307 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638317 | orchestrator |
2025-09-19 11:41:33.638328 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-19 11:41:33.638338 | orchestrator | Friday 19 September 2025 11:39:12 +0000 (0:00:00.438) 0:00:46.936 ******
2025-09-19 11:41:33.638349 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.638360 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:41:33.638370 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:41:33.638381 | orchestrator |
2025-09-19 11:41:33.638392 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-19 11:41:33.638402 | orchestrator | Friday 19 September 2025 11:39:12 +0000 (0:00:00.430) 0:00:47.366 ******
2025-09-19 11:41:33.638422 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638433 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638443 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638454 | orchestrator |
2025-09-19 11:41:33.638465 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-19 11:41:33.638475 | orchestrator | Friday 19 September 2025 11:39:13 +0000 (0:00:00.843) 0:00:48.210 ******
2025-09-19 11:41:33.638486 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638497 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638508 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-09-19 11:41:33.638518 | orchestrator |
2025-09-19 11:41:33.638529 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-09-19 11:41:33.638549 | orchestrator | Friday 19 September 2025 11:39:13 +0000 (0:00:00.413) 0:00:48.624 ******
2025-09-19 11:41:33.638560 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.638570 | orchestrator |
2025-09-19 11:41:33.638581 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-09-19 11:41:33.638592 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:10.330) 0:00:58.954 ******
2025-09-19 11:41:33.638602 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.638613 | orchestrator |
2025-09-19 11:41:33.638624 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-19 11:41:33.638634 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:00.136) 0:00:59.090 ******
2025-09-19 11:41:33.638645 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638656 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638666 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638677 | orchestrator |
2025-09-19 11:41:33.638688 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-09-19 11:41:33.638698 | orchestrator | Friday 19 September 2025 11:39:25 +0000 (0:00:00.965) 0:01:00.056 ******
2025-09-19 11:41:33.638709 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.638720 | orchestrator |
2025-09-19 11:41:33.638730 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-09-19 11:41:33.638741 | orchestrator | Friday 19 September 2025 11:39:33 +0000 (0:00:07.793) 0:01:07.850 ******
2025-09-19 11:41:33.638751 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.638762 | orchestrator |
2025-09-19 11:41:33.638772 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-09-19 11:41:33.638783 | orchestrator | Friday 19 September 2025 11:39:35 +0000 (0:00:02.544) 0:01:10.395 ******
2025-09-19 11:41:33.638794 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:41:33.638804 | orchestrator |
2025-09-19 11:41:33.638815 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-09-19 11:41:33.638825 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:02.232) 0:01:12.627 ******
2025-09-19 11:41:33.638836 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:41:33.638846 | orchestrator |
2025-09-19 11:41:33.638857 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-09-19 11:41:33.638868 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:00.119) 0:01:12.747 ******
2025-09-19 11:41:33.638878 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638889 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:41:33.638899 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:41:33.638910 | orchestrator |
2025-09-19 11:41:33.638921 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-09-19 11:41:33.638931 | orchestrator | Friday 19 September 2025 11:39:38 +0000 (0:00:00.312) 0:01:13.060 ******
2025-09-19 11:41:33.638942 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:41:33.638952 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-09-19 11:41:33.638963 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:41:33.638974 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:41:33.638988 | orchestrator |
2025-09-19 11:41:33.639006 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-09-19 11:41:33.639018 | orchestrator | skipping: no hosts matched
2025-09-19 11:41:33.639029 | orchestrator |
2025-09-19 11:41:33.639045
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 11:41:33.639057 | orchestrator | 2025-09-19 11:41:33.639068 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:41:33.639078 | orchestrator | Friday 19 September 2025 11:39:38 +0000 (0:00:00.458) 0:01:13.519 ****** 2025-09-19 11:41:33.639089 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:33.639100 | orchestrator | 2025-09-19 11:41:33.639111 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:41:33.639180 | orchestrator | Friday 19 September 2025 11:39:56 +0000 (0:00:18.109) 0:01:31.629 ****** 2025-09-19 11:41:33.639401 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:41:33.639424 | orchestrator | 2025-09-19 11:41:33.639436 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:41:33.639447 | orchestrator | Friday 19 September 2025 11:40:17 +0000 (0:00:20.556) 0:01:52.185 ****** 2025-09-19 11:41:33.639458 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:41:33.639469 | orchestrator | 2025-09-19 11:41:33.639480 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 11:41:33.639491 | orchestrator | 2025-09-19 11:41:33.639502 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:41:33.639514 | orchestrator | Friday 19 September 2025 11:40:19 +0000 (0:00:02.267) 0:01:54.453 ****** 2025-09-19 11:41:33.639524 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:33.639536 | orchestrator | 2025-09-19 11:41:33.639547 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:41:33.639558 | orchestrator | Friday 19 September 2025 11:40:38 +0000 (0:00:18.638) 0:02:13.091 ****** 2025-09-19 11:41:33.639569 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 11:41:33.639580 | orchestrator | 2025-09-19 11:41:33.639591 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:41:33.639602 | orchestrator | Friday 19 September 2025 11:40:58 +0000 (0:00:20.594) 0:02:33.686 ****** 2025-09-19 11:41:33.639613 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:41:33.639624 | orchestrator | 2025-09-19 11:41:33.639635 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 11:41:33.639646 | orchestrator | 2025-09-19 11:41:33.639668 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:41:33.639679 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:02.535) 0:02:36.222 ****** 2025-09-19 11:41:33.639690 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.639701 | orchestrator | 2025-09-19 11:41:33.639712 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:41:33.639723 | orchestrator | Friday 19 September 2025 11:41:12 +0000 (0:00:11.003) 0:02:47.225 ****** 2025-09-19 11:41:33.639734 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:33.639745 | orchestrator | 2025-09-19 11:41:33.639756 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:41:33.639767 | orchestrator | Friday 19 September 2025 11:41:16 +0000 (0:00:04.590) 0:02:51.816 ****** 2025-09-19 11:41:33.639778 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:33.639789 | orchestrator | 2025-09-19 11:41:33.639801 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 11:41:33.639812 | orchestrator | 2025-09-19 11:41:33.639822 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 11:41:33.639834 | orchestrator | 
Friday 19 September 2025 11:41:19 +0000 (0:00:02.413) 0:02:54.230 ****** 2025-09-19 11:41:33.639845 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:41:33.639856 | orchestrator | 2025-09-19 11:41:33.639867 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-19 11:41:33.639878 | orchestrator | Friday 19 September 2025 11:41:19 +0000 (0:00:00.476) 0:02:54.707 ****** 2025-09-19 11:41:33.639889 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.639899 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.639909 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.639918 | orchestrator | 2025-09-19 11:41:33.639928 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-19 11:41:33.639938 | orchestrator | Friday 19 September 2025 11:41:21 +0000 (0:00:01.931) 0:02:56.638 ****** 2025-09-19 11:41:33.639948 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.639958 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.639968 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.639978 | orchestrator | 2025-09-19 11:41:33.639988 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-19 11:41:33.640006 | orchestrator | Friday 19 September 2025 11:41:23 +0000 (0:00:01.949) 0:02:58.588 ****** 2025-09-19 11:41:33.640016 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.640026 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.640037 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.640048 | orchestrator | 2025-09-19 11:41:33.640059 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-19 11:41:33.640070 | orchestrator | Friday 19 September 2025 11:41:25 +0000 (0:00:01.822) 0:03:00.411 ****** 2025-09-19 11:41:33.640081 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.640092 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.640103 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:33.640114 | orchestrator | 2025-09-19 11:41:33.640125 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-19 11:41:33.640136 | orchestrator | Friday 19 September 2025 11:41:27 +0000 (0:00:01.995) 0:03:02.407 ****** 2025-09-19 11:41:33.640147 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:33.640158 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:41:33.640169 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:41:33.640180 | orchestrator | 2025-09-19 11:41:33.640191 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 11:41:33.640202 | orchestrator | Friday 19 September 2025 11:41:30 +0000 (0:00:02.593) 0:03:05.000 ****** 2025-09-19 11:41:33.640213 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:41:33.640225 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:41:33.640236 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:41:33.640246 | orchestrator | 2025-09-19 11:41:33.640257 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:41:33.640289 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 11:41:33.640302 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-19 11:41:33.640314 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 11:41:33.640325 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 11:41:33.640336 | orchestrator | 2025-09-19 11:41:33.640346 | orchestrator | 2025-09-19 11:41:33.640356 | orchestrator | 
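The handlers and tasks in this play repeatedly wait for each Galera node to report a synced WSREP state, and one task explicitly divides hosts by that status. A minimal sketch of such a check, assuming status output in the tab-separated `variable<TAB>value` form that `mysql -N -B -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"` would print (the exact query used by the role is not shown in this log):

```python
# Sketch: classify hosts by WSREP sync status, mirroring the
# "Divide hosts by their MariaDB service WSREP sync status" task.
# Input format is an assumption: tab-separated SHOW STATUS output.

def wsrep_synced(show_status_output: str) -> bool:
    """True if wsrep_local_state_comment reports 'Synced'."""
    for line in show_status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            return parts[1].strip() == "Synced"
    return False


def divide_by_sync(status_by_host: dict) -> tuple:
    """Split hosts into (synced, unsynced) lists."""
    synced = [h for h, out in status_by_host.items() if wsrep_synced(out)]
    unsynced = [h for h in status_by_host if h not in synced]
    return synced, unsynced
```

A "Fail when MariaDB services are not synced across the whole cluster" step then amounts to asserting that the `unsynced` list is empty.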
TASKS RECAP ******************************************************************** 2025-09-19 11:41:33.640366 | orchestrator | Friday 19 September 2025 11:41:30 +0000 (0:00:00.335) 0:03:05.335 ****** 2025-09-19 11:41:33.640375 | orchestrator | =============================================================================== 2025-09-19 11:41:33.640385 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.15s 2025-09-19 11:41:33.640394 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.75s 2025-09-19 11:41:33.640404 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.06s 2025-09-19 11:41:33.640413 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.00s 2025-09-19 11:41:33.640423 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.33s 2025-09-19 11:41:33.640433 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.79s 2025-09-19 11:41:33.640447 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.80s 2025-09-19 11:41:33.640457 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2025-09-19 11:41:33.640467 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.09s 2025-09-19 11:41:33.640476 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.95s 2025-09-19 11:41:33.640493 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.44s 2025-09-19 11:41:33.640502 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.42s 2025-09-19 11:41:33.640512 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s 2025-09-19 11:41:33.640522 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 2.71s 2025-09-19 11:41:33.640531 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.65s 2025-09-19 11:41:33.640541 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.59s 2025-09-19 11:41:33.640551 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.55s 2025-09-19 11:41:33.640560 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s 2025-09-19 11:41:33.640570 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s 2025-09-19 11:41:33.640579 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.23s 2025-09-19 11:41:33.640589 | orchestrator | 2025-09-19 11:41:33 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:33.640599 | orchestrator | 2025-09-19 11:41:33 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:33.640609 | orchestrator | 2025-09-19 11:41:33 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:33.640619 | orchestrator | 2025-09-19 11:41:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:36.685753 | orchestrator | 2025-09-19 11:41:36 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:36.687414 | orchestrator | 2025-09-19 11:41:36 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:36.689421 | orchestrator | 2025-09-19 11:41:36 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:36.689617 | orchestrator | 2025-09-19 11:41:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:39.723766 | orchestrator | 2025-09-19 11:41:39 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 
11:41:39.726857 | orchestrator | 2025-09-19 11:41:39 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:39.728669 | orchestrator | 2025-09-19 11:41:39 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:39.728699 | orchestrator | 2025-09-19 11:41:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:42.767009 | orchestrator | 2025-09-19 11:41:42 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:42.768474 | orchestrator | 2025-09-19 11:41:42 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:42.771331 | orchestrator | 2025-09-19 11:41:42 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:42.771374 | orchestrator | 2025-09-19 11:41:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:45.815118 | orchestrator | 2025-09-19 11:41:45 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:45.816161 | orchestrator | 2025-09-19 11:41:45 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:45.817506 | orchestrator | 2025-09-19 11:41:45 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:45.817542 | orchestrator | 2025-09-19 11:41:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:48.854875 | orchestrator | 2025-09-19 11:41:48 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:48.855001 | orchestrator | 2025-09-19 11:41:48 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:48.855730 | orchestrator | 2025-09-19 11:41:48 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:48.855753 | orchestrator | 2025-09-19 11:41:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:51.889440 | orchestrator | 2025-09-19 11:41:51 | 
INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:51.889541 | orchestrator | 2025-09-19 11:41:51 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:51.890341 | orchestrator | 2025-09-19 11:41:51 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:51.890436 | orchestrator | 2025-09-19 11:41:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:54.934592 | orchestrator | 2025-09-19 11:41:54 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:54.937062 | orchestrator | 2025-09-19 11:41:54 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:54.939343 | orchestrator | 2025-09-19 11:41:54 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:54.939377 | orchestrator | 2025-09-19 11:41:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:57.976890 | orchestrator | 2025-09-19 11:41:57 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:41:57.978450 | orchestrator | 2025-09-19 11:41:57 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:41:57.981428 | orchestrator | 2025-09-19 11:41:57 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:41:57.981662 | orchestrator | 2025-09-19 11:41:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:01.015895 | orchestrator | 2025-09-19 11:42:01 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:01.017424 | orchestrator | 2025-09-19 11:42:01 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:01.019009 | orchestrator | 2025-09-19 11:42:01 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:01.019048 | orchestrator | 2025-09-19 11:42:01 | INFO  | Wait 1 second(s) until 
the next check 2025-09-19 11:42:04.065635 | orchestrator | 2025-09-19 11:42:04 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:04.067038 | orchestrator | 2025-09-19 11:42:04 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:04.068461 | orchestrator | 2025-09-19 11:42:04 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:04.068490 | orchestrator | 2025-09-19 11:42:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:07.105429 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:07.105481 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:07.105907 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:07.105920 | orchestrator | 2025-09-19 11:42:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:10.142341 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:10.143673 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:10.145177 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:10.145203 | orchestrator | 2025-09-19 11:42:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:13.181173 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:13.182957 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:13.184496 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 
11:42:13.184522 | orchestrator | 2025-09-19 11:42:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:16.226796 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:16.226877 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:16.228031 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:16.228406 | orchestrator | 2025-09-19 11:42:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:19.267732 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:19.268646 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:19.270440 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:19.270474 | orchestrator | 2025-09-19 11:42:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:22.310879 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:22.311060 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:22.311091 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:22.311103 | orchestrator | 2025-09-19 11:42:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:25.355863 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:25.358146 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:25.360779 | orchestrator | 2025-09-19 11:42:25 | 
INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:25.360826 | orchestrator | 2025-09-19 11:42:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:28.424386 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:28.427351 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:28.429108 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:28.429988 | orchestrator | 2025-09-19 11:42:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:31.477549 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:31.479088 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:31.482092 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:31.482120 | orchestrator | 2025-09-19 11:42:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:34.528654 | orchestrator | 2025-09-19 11:42:34 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:34.530463 | orchestrator | 2025-09-19 11:42:34 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:42:34.532607 | orchestrator | 2025-09-19 11:42:34 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state STARTED 2025-09-19 11:42:34.532641 | orchestrator | 2025-09-19 11:42:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:37.587424 | orchestrator | 2025-09-19 11:42:37 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED 2025-09-19 11:42:37.588490 | orchestrator | 2025-09-19 11:42:37 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in 
state STARTED 2025-09-19 11:42:37.589756 | orchestrator | 2025-09-19 11:42:37 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED 2025-09-19 11:42:37.592792 | orchestrator | 2025-09-19 11:42:37 | INFO  | Task 9b782e61-318d-462e-8b12-b7277691622a is in state SUCCESS 2025-09-19 11:42:37.594980 | orchestrator | 2025-09-19 11:42:37.595020 | orchestrator | 2025-09-19 11:42:37.595033 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-19 11:42:37.595045 | orchestrator | 2025-09-19 11:42:37.595056 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 11:42:37.595068 | orchestrator | Friday 19 September 2025 11:40:28 +0000 (0:00:00.529) 0:00:00.529 ****** 2025-09-19 11:42:37.595079 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:42:37.595091 | orchestrator | 2025-09-19 11:42:37.595102 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-19 11:42:37.595113 | orchestrator | Friday 19 September 2025 11:40:28 +0000 (0:00:00.511) 0:00:01.040 ****** 2025-09-19 11:42:37.595125 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.595776 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.595799 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596279 | orchestrator | 2025-09-19 11:42:37.596321 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-19 11:42:37.596333 | orchestrator | Friday 19 September 2025 11:40:29 +0000 (0:00:00.616) 0:00:01.657 ****** 2025-09-19 11:42:37.596345 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596356 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596367 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596378 | orchestrator | 2025-09-19 11:42:37.596389 | orchestrator | TASK [ceph-facts 
: Check if podman binary is present] ************************** 2025-09-19 11:42:37.596400 | orchestrator | Friday 19 September 2025 11:40:29 +0000 (0:00:00.331) 0:00:01.988 ****** 2025-09-19 11:42:37.596411 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596422 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596433 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596444 | orchestrator | 2025-09-19 11:42:37.596455 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-19 11:42:37.596466 | orchestrator | Friday 19 September 2025 11:40:30 +0000 (0:00:00.833) 0:00:02.822 ****** 2025-09-19 11:42:37.596477 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596487 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596498 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596509 | orchestrator | 2025-09-19 11:42:37.596520 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-19 11:42:37.596531 | orchestrator | Friday 19 September 2025 11:40:31 +0000 (0:00:00.295) 0:00:03.117 ****** 2025-09-19 11:42:37.596542 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596553 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596588 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596599 | orchestrator | 2025-09-19 11:42:37.596610 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-19 11:42:37.596621 | orchestrator | Friday 19 September 2025 11:40:31 +0000 (0:00:00.289) 0:00:03.407 ****** 2025-09-19 11:42:37.596631 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596642 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596653 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596663 | orchestrator | 2025-09-19 11:42:37.596675 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 
2025-09-19 11:42:37.596687 | orchestrator | Friday 19 September 2025 11:40:31 +0000 (0:00:00.304) 0:00:03.711 ****** 2025-09-19 11:42:37.596698 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.596709 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.596720 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.596731 | orchestrator | 2025-09-19 11:42:37.596742 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-19 11:42:37.596752 | orchestrator | Friday 19 September 2025 11:40:32 +0000 (0:00:00.489) 0:00:04.201 ****** 2025-09-19 11:42:37.596763 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596774 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596784 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596795 | orchestrator | 2025-09-19 11:42:37.596806 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-19 11:42:37.596817 | orchestrator | Friday 19 September 2025 11:40:32 +0000 (0:00:00.293) 0:00:04.494 ****** 2025-09-19 11:42:37.596828 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:42:37.596838 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:42:37.596849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:42:37.596860 | orchestrator | 2025-09-19 11:42:37.596872 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-19 11:42:37.596884 | orchestrator | Friday 19 September 2025 11:40:33 +0000 (0:00:00.678) 0:00:05.172 ****** 2025-09-19 11:42:37.596896 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.596908 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.596920 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.596932 | orchestrator | 
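The orchestrator output earlier in this log shows a simple polling pattern: each tracked task is reported as `is in state STARTED`, followed by `Wait 1 second(s) until the next check`, until the state flips to `SUCCESS`. A generic sketch of that loop, assuming only two states matter here (`STARTED` and a terminal state such as `SUCCESS`; `get_state` is a hypothetical callback):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until none is STARTED, like the
    'Task ... is in state STARTED / Wait 1 second(s)' loop in this log.

    get_state(task_id) returns the current state string.
    Returns the final state per task id, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        time.sleep(interval)
```

With `interval=1.0` this reproduces the roughly one-second cadence (plus per-check overhead) visible in the timestamps above.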
2025-09-19 11:42:37.596944 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 11:42:37.596956 | orchestrator | Friday 19 September 2025 11:40:33 +0000 (0:00:00.360) 0:00:05.532 ******
2025-09-19 11:42:37.596968 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 11:42:37.596980 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:42:37.596992 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:42:37.597004 | orchestrator |
2025-09-19 11:42:37.597016 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 11:42:37.597027 | orchestrator | Friday 19 September 2025 11:40:35 +0000 (0:00:02.014) 0:00:07.547 ******
2025-09-19 11:42:37.597040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 11:42:37.597062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 11:42:37.597076 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 11:42:37.597087 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597099 | orchestrator |
2025-09-19 11:42:37.597112 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-19 11:42:37.597167 | orchestrator | Friday 19 September 2025 11:40:35 +0000 (0:00:00.376) 0:00:07.923 ******
2025-09-19 11:42:37.597183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597233 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597244 | orchestrator |
2025-09-19 11:42:37.597279 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-19 11:42:37.597291 | orchestrator | Friday 19 September 2025 11:40:36 +0000 (0:00:00.769) 0:00:08.693 ******
2025-09-19 11:42:37.597304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597341 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597352 | orchestrator |
2025-09-19 11:42:37.597363 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-19 11:42:37.597374 | orchestrator | Friday 19 September 2025 11:40:36 +0000 (0:00:00.183) 0:00:08.876 ******
2025-09-19 11:42:37.597388 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b7ae04b18f31', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 11:40:34.063977', 'end': '2025-09-19 11:40:34.115520', 'delta': '0:00:00.051543', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b7ae04b18f31'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597407 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9c0c326713db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 11:40:34.761506', 'end': '2025-09-19 11:40:34.801877', 'delta': '0:00:00.040371', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9c0c326713db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597461 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cce834b89f89', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 11:40:35.305309', 'end': '2025-09-19 11:40:35.360249', 'delta': '0:00:00.054940', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce834b89f89'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:42:37.597475 | orchestrator |
2025-09-19 11:42:37.597487 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-19 11:42:37.597497 | orchestrator | Friday 19 September 2025 11:40:37 +0000 (0:00:00.385) 0:00:09.262 ******
2025-09-19 11:42:37.597508 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:42:37.597519 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:42:37.597530 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:42:37.597540 | orchestrator |
2025-09-19 11:42:37.597552 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 11:42:37.597562 | orchestrator | Friday 19 September 2025 11:40:37 +0000 (0:00:00.420) 0:00:09.682 ******
2025-09-19 11:42:37.597573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-19 11:42:37.597584 | orchestrator |
2025-09-19 11:42:37.597594 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 11:42:37.597605 | orchestrator | Friday 19 September 2025 11:40:39 +0000 (0:00:01.698) 0:00:11.381 ******
2025-09-19 11:42:37.597616 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597626 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.597637 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.597648 | orchestrator |
2025-09-19 11:42:37.597659 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 11:42:37.597669 | orchestrator | Friday 19 September 2025 11:40:39 +0000 (0:00:00.266) 0:00:11.647 ******
2025-09-19 11:42:37.597680 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597691 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.597702 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.597712 | orchestrator |
2025-09-19 11:42:37.597723 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:42:37.597734 | orchestrator | Friday 19 September 2025 11:40:39 +0000 (0:00:00.365) 0:00:12.012 ******
2025-09-19 11:42:37.597744 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597755 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.597766 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.597777 | orchestrator |
2025-09-19 11:42:37.597787 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 11:42:37.597798 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:00.384) 0:00:12.397 ******
2025-09-19 11:42:37.597809 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:42:37.597820 | orchestrator |
2025-09-19 11:42:37.597831 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 11:42:37.597841 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:00.109) 0:00:12.507 ******
2025-09-19 11:42:37.597852 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597863 | orchestrator |
2025-09-19 11:42:37.597874 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:42:37.597884 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:00.210) 0:00:12.717 ******
2025-09-19 11:42:37.597895 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597906 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.597916 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.597934 | orchestrator |
2025-09-19 11:42:37.597945 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 11:42:37.597956 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:00.251) 0:00:12.968 ******
2025-09-19 11:42:37.597966 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.597977 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.597988 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.597998 | orchestrator |
2025-09-19 11:42:37.598009 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 11:42:37.598071 | orchestrator | Friday 19 September 2025 11:40:41 +0000 (0:00:00.277) 0:00:13.246 ******
2025-09-19 11:42:37.598083 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598094 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.598104 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.598115 | orchestrator |
2025-09-19 11:42:37.598126 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 11:42:37.598137 | orchestrator | Friday 19 September 2025 11:40:41 +0000 (0:00:00.387) 0:00:13.633 ******
2025-09-19 11:42:37.598147 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598158 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.598169 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.598179 | orchestrator |
2025-09-19 11:42:37.598190 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 11:42:37.598201 | orchestrator | Friday 19 September 2025 11:40:41 +0000 (0:00:00.293) 0:00:13.927 ******
2025-09-19 11:42:37.598211 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598222 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.598233 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.598306 | orchestrator |
2025-09-19 11:42:37.598318 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 11:42:37.598336 | orchestrator | Friday 19 September 2025 11:40:42 +0000 (0:00:00.277) 0:00:14.205 ******
2025-09-19 11:42:37.598347 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598357 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.598368 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.598379 | orchestrator |
2025-09-19 11:42:37.598390 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 11:42:37.598437 | orchestrator | Friday 19 September 2025 11:40:42 +0000 (0:00:00.282) 0:00:14.488 ******
2025-09-19 11:42:37.598450 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598461 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.598472 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:37.598483 | orchestrator |
2025-09-19 11:42:37.598493 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 11:42:37.598504 | orchestrator | Friday 19 September 2025 11:40:42 +0000 (0:00:00.420) 0:00:14.908 ******
2025-09-19 11:42:37.598517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d', 'dm-uuid-LVM-1X5jOw5YrOpdBZp1inS61cY4IZgr0qkbS00YRoEWqLvmSH5VCp59bD9C5gLTzCR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0', 'dm-uuid-LVM-bToMsaMj4RbkRV92dGYGektzmUyq84td1UhSOMqph4YGMZkUxddkOkY7ZYKExd3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YU1Dvu-xG3I-AwmX-XQC5-6YUC-aBPC-2Y3aoD', 'scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f', 'scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7', 'dm-uuid-LVM-UX7zUPNGiW0Fz1MJHY71fwZ6QYfyKwS9XvDKSKF0EM6OSh31mH04XGsl3daKj1BL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUBdC2-LpG7-omzw-GYkc-VKfE-4FdU-CFyZep', 'scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238', 'scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2', 'dm-uuid-LVM-K0PDPI4eASPQXfjB6Qa1kDA6gSTSFdCfwq1XGiLdA2E0nTHZl08q1XXALebICKB1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6', 'scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598891 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:37.598902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.598984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.598998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qtr002-FGlN-pk9H-NbNC-e6y9-NFqg-3tsncr', 'scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd', 'scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.599018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0B1f4w-AsFN-VTXc-1xv7-VN32-2REQ-2o6M9o', 'scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7', 'scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.599030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39', 'dm-uuid-LVM-dTXFflCdQ7PBCUHBj3A63R0WdXnAsDdED3r94jEdLUDrw7CrZG4kzyYjPZEyfmxk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d', 'scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.599053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1', 'dm-uuid-LVM-YIFZjCsRr7JIF9aCqwtdyN5XmPO2pj6JRCAnTvD3ltEse3AM0y6TFaBey5rpAVXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.599088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599107 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:37.599119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:42:37.599212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:42:37.599234 |
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HE7Sp2-tIYZ-dcwg-7eMf-hWHx-qJLn-ck38ib', 'scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59', 'scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:42:37.599246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ag9xcB-1iLg-l4WH-1JOO-W30A-gWpl-0b8RtB', 'scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed', 'scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:42:37.599278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36', 'scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:42:37.599300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:42:37.599321 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.599332 | orchestrator | 2025-09-19 11:42:37.599343 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 11:42:37.599355 | orchestrator | Friday 19 September 2025 11:40:43 +0000 (0:00:00.486) 0:00:15.395 ****** 2025-09-19 11:42:37.599366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d', 'dm-uuid-LVM-1X5jOw5YrOpdBZp1inS61cY4IZgr0qkbS00YRoEWqLvmSH5VCp59bD9C5gLTzCR0'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0', 'dm-uuid-LVM-bToMsaMj4RbkRV92dGYGektzmUyq84td1UhSOMqph4YGMZkUxddkOkY7ZYKExd3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599412 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7', 'dm-uuid-LVM-UX7zUPNGiW0Fz1MJHY71fwZ6QYfyKwS9XvDKSKF0EM6OSh31mH04XGsl3daKj1BL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2', 'dm-uuid-LVM-K0PDPI4eASPQXfjB6Qa1kDA6gSTSFdCfwq1XGiLdA2E0nTHZl08q1XXALebICKB1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-09-19 11:42:37.599542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16', 'scsi-SQEMU_QEMU_HARDDISK_ffa378a0-c75b-4616-81d3-b00e624d57d0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c75d7215--6866--5647--89df--878c4666c32d-osd--block--c75d7215--6866--5647--89df--878c4666c32d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YU1Dvu-xG3I-AwmX-XQC5-6YUC-aBPC-2Y3aoD', 'scsi-0QEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f', 'scsi-SQEMU_QEMU_HARDDISK_adddc9ff-e41b-477e-a261-fe5fa77d3a0f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0-osd--block--b93a97a3--21ec--5dc9--a656--27e3bfc6d1b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jUBdC2-LpG7-omzw-GYkc-VKfE-4FdU-CFyZep', 'scsi-0QEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238', 'scsi-SQEMU_QEMU_HARDDISK_93b11a5e-f517-4b3c-9813-3ed2f0fa6238'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6', 'scsi-SQEMU_QEMU_HARDDISK_53ba9bad-d72e-4bb6-9573-8eecfdb7d8b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599676 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599688 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599700 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.599711 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599734 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16', 'scsi-SQEMU_QEMU_HARDDISK_42505943-ab11-4a68-89b8-1d4f3cc4dc03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ac676d1d--4f4c--546f--a12f--f85171bcd1d7-osd--block--ac676d1d--4f4c--546f--a12f--f85171bcd1d7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Qtr002-FGlN-pk9H-NbNC-e6y9-NFqg-3tsncr', 'scsi-0QEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd', 'scsi-SQEMU_QEMU_HARDDISK_b4727c68-ff73-4ff9-aa8c-694157ecb2dd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ffd16df6--6207--59ff--a831--a7eb6df6d5c2-osd--block--ffd16df6--6207--59ff--a831--a7eb6df6d5c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0B1f4w-AsFN-VTXc-1xv7-VN32-2REQ-2o6M9o', 'scsi-0QEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7', 'scsi-SQEMU_QEMU_HARDDISK_39dbe9ae-8bf0-4e12-9ca8-c59aebdbd1f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d', 'scsi-SQEMU_QEMU_HARDDISK_3322ab10-28f2-47f3-9821-bfcea3cb9d1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.599857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39', 'dm-uuid-LVM-dTXFflCdQ7PBCUHBj3A63R0WdXnAsDdED3r94jEdLUDrw7CrZG4kzyYjPZEyfmxk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1', 'dm-uuid-LVM-YIFZjCsRr7JIF9aCqwtdyN5XmPO2pj6JRCAnTvD3ltEse3AM0y6TFaBey5rpAVXi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599959 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.599982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0f197da4-9977-4ed0-ade0-de83f43b89ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600025 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9d0af248--3195--52cb--bed6--977ad9e4ee39-osd--block--9d0af248--3195--52cb--bed6--977ad9e4ee39'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HE7Sp2-tIYZ-dcwg-7eMf-hWHx-qJLn-ck38ib', 'scsi-0QEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59', 'scsi-SQEMU_QEMU_HARDDISK_14764732-c430-42d5-be90-4134a981fa59'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6e702043--5e82--5f33--ad25--d539496f9fd1-osd--block--6e702043--5e82--5f33--ad25--d539496f9fd1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ag9xcB-1iLg-l4WH-1JOO-W30A-gWpl-0b8RtB', 'scsi-0QEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed', 'scsi-SQEMU_QEMU_HARDDISK_02d4d70c-9632-40cc-9453-c0d53d6148ed'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600055 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36', 'scsi-SQEMU_QEMU_HARDDISK_29dd875d-2efb-4f11-ac43-6353645f7e36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600077 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:42:37.600089 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600100 | orchestrator | 2025-09-19 11:42:37.600111 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 11:42:37.600122 | orchestrator | Friday 19 September 2025 11:40:43 +0000 (0:00:00.568) 0:00:15.964 ****** 2025-09-19 11:42:37.600133 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.600144 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.600155 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.600165 | orchestrator | 2025-09-19 11:42:37.600176 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-19 11:42:37.600187 | orchestrator | Friday 19 September 2025 11:40:44 +0000 (0:00:00.638) 0:00:16.602 ****** 2025-09-19 11:42:37.600198 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.600209 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.600219 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.600230 | orchestrator | 2025-09-19 11:42:37.600241 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:42:37.600268 | orchestrator | Friday 19 September 2025 11:40:44 +0000 (0:00:00.367) 0:00:16.970 ****** 2025-09-19 11:42:37.600279 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.600290 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.600301 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.600311 | orchestrator | 2025-09-19 11:42:37.600322 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:42:37.600333 | orchestrator | Friday 19 September 2025 11:40:45 +0000 (0:00:00.631) 0:00:17.602 
****** 2025-09-19 11:42:37.600344 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600355 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600366 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600377 | orchestrator | 2025-09-19 11:42:37.600388 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:42:37.600398 | orchestrator | Friday 19 September 2025 11:40:45 +0000 (0:00:00.255) 0:00:17.857 ****** 2025-09-19 11:42:37.600409 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600420 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600431 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600442 | orchestrator | 2025-09-19 11:42:37.600452 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:42:37.600463 | orchestrator | Friday 19 September 2025 11:40:46 +0000 (0:00:00.350) 0:00:18.208 ****** 2025-09-19 11:42:37.600481 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600492 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600502 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600513 | orchestrator | 2025-09-19 11:42:37.600524 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 11:42:37.600535 | orchestrator | Friday 19 September 2025 11:40:46 +0000 (0:00:00.381) 0:00:18.590 ****** 2025-09-19 11:42:37.600545 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 11:42:37.600557 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 11:42:37.600568 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 11:42:37.600578 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 11:42:37.600589 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 11:42:37.600600 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 11:42:37.600611 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 11:42:37.600621 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 11:42:37.600632 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 11:42:37.600643 | orchestrator | 2025-09-19 11:42:37.600654 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 11:42:37.600665 | orchestrator | Friday 19 September 2025 11:40:47 +0000 (0:00:00.758) 0:00:19.349 ****** 2025-09-19 11:42:37.600676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 11:42:37.600686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:42:37.600697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:42:37.600708 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 11:42:37.600729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 11:42:37.600740 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 11:42:37.600750 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600761 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 11:42:37.600772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 11:42:37.600782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 11:42:37.600793 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600804 | orchestrator | 2025-09-19 11:42:37.600815 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 11:42:37.600826 | orchestrator | Friday 19 September 2025 11:40:47 +0000 (0:00:00.318) 0:00:19.667 ****** 2025-09-19 
11:42:37.600837 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:42:37.600848 | orchestrator | 2025-09-19 11:42:37.600864 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:42:37.600875 | orchestrator | Friday 19 September 2025 11:40:48 +0000 (0:00:00.578) 0:00:20.246 ****** 2025-09-19 11:42:37.600887 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600898 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600908 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600919 | orchestrator | 2025-09-19 11:42:37.600936 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 11:42:37.600947 | orchestrator | Friday 19 September 2025 11:40:48 +0000 (0:00:00.281) 0:00:20.528 ****** 2025-09-19 11:42:37.600958 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.600969 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.600980 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.600991 | orchestrator | 2025-09-19 11:42:37.601001 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 11:42:37.601020 | orchestrator | Friday 19 September 2025 11:40:48 +0000 (0:00:00.268) 0:00:20.796 ****** 2025-09-19 11:42:37.601031 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.601042 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.601053 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:37.601064 | orchestrator | 2025-09-19 11:42:37.601075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:42:37.601086 | orchestrator | Friday 19 September 2025 11:40:49 +0000 (0:00:00.271) 0:00:21.068 ****** 2025-09-19 
11:42:37.601097 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.601107 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.601118 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.601129 | orchestrator | 2025-09-19 11:42:37.601140 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:42:37.601151 | orchestrator | Friday 19 September 2025 11:40:49 +0000 (0:00:00.470) 0:00:21.539 ****** 2025-09-19 11:42:37.601161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:42:37.601172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:42:37.601183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:42:37.601194 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.601205 | orchestrator | 2025-09-19 11:42:37.601216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:42:37.601226 | orchestrator | Friday 19 September 2025 11:40:49 +0000 (0:00:00.367) 0:00:21.906 ****** 2025-09-19 11:42:37.601237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:42:37.601301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:42:37.601314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:42:37.601325 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.601335 | orchestrator | 2025-09-19 11:42:37.601346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:42:37.601357 | orchestrator | Friday 19 September 2025 11:40:50 +0000 (0:00:00.343) 0:00:22.249 ****** 2025-09-19 11:42:37.601368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:42:37.601379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:42:37.601389 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:42:37.601400 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.601411 | orchestrator | 2025-09-19 11:42:37.601422 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:42:37.601433 | orchestrator | Friday 19 September 2025 11:40:50 +0000 (0:00:00.369) 0:00:22.619 ****** 2025-09-19 11:42:37.601444 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:42:37.601454 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:42:37.601465 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:42:37.601476 | orchestrator | 2025-09-19 11:42:37.601487 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 11:42:37.601497 | orchestrator | Friday 19 September 2025 11:40:50 +0000 (0:00:00.273) 0:00:22.893 ****** 2025-09-19 11:42:37.601508 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:42:37.601519 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:42:37.601530 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:42:37.601540 | orchestrator | 2025-09-19 11:42:37.601551 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 11:42:37.601562 | orchestrator | Friday 19 September 2025 11:40:51 +0000 (0:00:00.442) 0:00:23.335 ****** 2025-09-19 11:42:37.601573 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:42:37.601583 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:42:37.601594 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:42:37.601605 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:42:37.601627 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-19 11:42:37.601638 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:42:37.601648 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:42:37.601659 | orchestrator | 2025-09-19 11:42:37.601670 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 11:42:37.601681 | orchestrator | Friday 19 September 2025 11:40:52 +0000 (0:00:00.831) 0:00:24.166 ****** 2025-09-19 11:42:37.601691 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:42:37.601702 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:42:37.601713 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:42:37.601724 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:42:37.601739 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:42:37.601750 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:42:37.601761 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:42:37.601772 | orchestrator | 2025-09-19 11:42:37.601788 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-19 11:42:37.601800 | orchestrator | Friday 19 September 2025 11:40:53 +0000 (0:00:01.616) 0:00:25.782 ****** 2025-09-19 11:42:37.601809 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:37.601819 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:37.601829 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-19 11:42:37.601838 | orchestrator | 2025-09-19 11:42:37.601848 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-19 11:42:37.601857 | orchestrator | Friday 19 September 2025 11:40:54 +0000 (0:00:00.331) 0:00:26.113 ****** 2025-09-19 11:42:37.601868 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:42:37.601878 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:42:37.601888 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:42:37.601898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:42:37.601908 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:42:37.601918 | orchestrator | 2025-09-19 11:42:37.601927 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-19 11:42:37.601937 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:46.039) 0:01:12.153 ****** 2025-09-19 11:42:37.601946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.601962 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.601972 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.601981 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.601991 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602010 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-19 11:42:37.602047 | orchestrator | 2025-09-19 11:42:37.602057 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-19 11:42:37.602067 | orchestrator | Friday 19 September 2025 11:42:03 +0000 (0:00:23.630) 0:01:35.783 ****** 2025-09-19 11:42:37.602076 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602086 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602095 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602105 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602114 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602124 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:42:37.602133 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 11:42:37.602142 | orchestrator |
2025-09-19 11:42:37.602152 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-19 11:42:37.602161 | orchestrator | Friday 19 September 2025 11:42:15 +0000 (0:00:11.991) 0:01:47.775 ******
2025-09-19 11:42:37.602171 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602180 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602190 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602209 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602219 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602234 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602245 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602271 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602291 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602300 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602310 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602319 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602329 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602338 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:42:37.602348 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:42:37.602358 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:42:37.602367 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-19 11:42:37.602387 | orchestrator |
2025-09-19 11:42:37.602397 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:42:37.602407 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-19 11:42:37.602417 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 11:42:37.602427 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 11:42:37.602437 | orchestrator |
2025-09-19 11:42:37.602446 | orchestrator |
2025-09-19 11:42:37.602456 | orchestrator |
2025-09-19 11:42:37.602465 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:42:37.602475 | orchestrator | Friday 19 September 2025 11:42:34 +0000 (0:00:18.465) 0:02:06.240 ******
2025-09-19 11:42:37.602484 | orchestrator | ===============================================================================
2025-09-19 11:42:37.602494 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.04s
2025-09-19 11:42:37.602504 | orchestrator | generate keys ---------------------------------------------------------- 23.63s
2025-09-19 11:42:37.602513 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.47s
2025-09-19 11:42:37.602523 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s
2025-09-19 11:42:37.602532 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.01s
2025-09-19 11:42:37.602542 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s
2025-09-19 11:42:37.602632 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.62s
2025-09-19 11:42:37.602655 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2025-09-19 11:42:37.602664 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s
2025-09-19 11:42:37.602674 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2025-09-19 11:42:37.602684 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.76s
2025-09-19 11:42:37.602693 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2025-09-19 11:42:37.602703 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s
2025-09-19 11:42:37.602712 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s
2025-09-19 11:42:37.602722 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s
2025-09-19 11:42:37.602731 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.58s
2025-09-19 11:42:37.602741 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s
2025-09-19 11:42:37.602750 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.51s
2025-09-19 11:42:37.602760 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.49s
2025-09-19 11:42:37.602769 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.49s
2025-09-19 11:42:37.602779 | orchestrator | 2025-09-19 11:42:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:40.645030 | orchestrator | 2025-09-19 11:42:40 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:40.646517 | orchestrator | 2025-09-19 11:42:40 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:40.648699 | orchestrator | 2025-09-19 11:42:40 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:40.648866 | orchestrator | 2025-09-19 11:42:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:43.691011 | orchestrator | 2025-09-19 11:42:43 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:43.692717 | orchestrator | 2025-09-19 11:42:43 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:43.694207 | orchestrator | 2025-09-19 11:42:43 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:43.694554 | orchestrator | 2025-09-19 11:42:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:46.741800 | orchestrator | 2025-09-19 11:42:46 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:46.743689 | orchestrator | 2025-09-19 11:42:46 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:46.745862 | orchestrator | 2025-09-19 11:42:46 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:46.746141 | orchestrator | 2025-09-19 11:42:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:49.786988 | orchestrator | 2025-09-19 11:42:49 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:49.788582 | orchestrator | 2025-09-19 11:42:49 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:49.790772 | orchestrator | 2025-09-19 11:42:49 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:49.790805 | orchestrator | 2025-09-19 11:42:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:52.839051 | orchestrator | 2025-09-19 11:42:52 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:52.839158 | orchestrator | 2025-09-19 11:42:52 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:52.840859 | orchestrator | 2025-09-19 11:42:52 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:52.840886 | orchestrator | 2025-09-19 11:42:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:55.898639 | orchestrator | 2025-09-19 11:42:55 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:55.899585 | orchestrator | 2025-09-19 11:42:55 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:55.901855 | orchestrator | 2025-09-19 11:42:55 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:55.902143 | orchestrator | 2025-09-19 11:42:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:58.947549 | orchestrator | 2025-09-19 11:42:58 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:42:58.949349 | orchestrator | 2025-09-19 11:42:58 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:42:58.951119 | orchestrator | 2025-09-19 11:42:58 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:42:58.951468 | orchestrator | 2025-09-19 11:42:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:43:01.993421 | orchestrator | 2025-09-19 11:43:01 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:43:01.994273 | orchestrator | 2025-09-19 11:43:01 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:43:01.996497 | orchestrator | 2025-09-19 11:43:01 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state STARTED
2025-09-19 11:43:01.997019 | orchestrator | 2025-09-19 11:43:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:43:05.041295 | orchestrator | 2025-09-19 11:43:05 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:43:05.043102 | orchestrator | 2025-09-19 11:43:05 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:43:05.044612 | orchestrator | 2025-09-19 11:43:05 | INFO  | Task a058b8c5-7428-4c59-af86-3df99628180a is in state SUCCESS
2025-09-19 11:43:05.046266 | orchestrator | 2025-09-19 11:43:05 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED
2025-09-19 11:43:05.046546 | orchestrator | 2025-09-19 11:43:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:43:08.098417 | orchestrator | 2025-09-19 11:43:08 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state STARTED
2025-09-19 11:43:08.098987 | orchestrator | 2025-09-19 11:43:08 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED
2025-09-19 11:43:08.100786 | orchestrator | 2025-09-19 11:43:08 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED
2025-09-19 11:43:08.100856 | orchestrator | 2025-09-19 11:43:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:43:11.159441 | orchestrator |
2025-09-19 11:43:11.159539 | orchestrator |
2025-09-19 11:43:11.159553 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-19 11:43:11.159569 | orchestrator |
2025-09-19 11:43:11.159588 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-19 11:43:11.159606 | orchestrator | Friday 19 September 2025 11:42:38 +0000 (0:00:00.156) 0:00:00.156 ******
2025-09-19 11:43:11.159624 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-19 11:43:11.159659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.159671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.159751 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:11.160299 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160315 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-19 11:43:11.160326 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-19 11:43:11.160338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-19 11:43:11.160349 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-19 11:43:11.160360 | orchestrator |
2025-09-19 11:43:11.160371 | orchestrator | TASK [Create share directory] **************************************************
2025-09-19 11:43:11.160382 | orchestrator | Friday 19 September 2025 11:42:43 +0000 (0:00:04.392) 0:00:04.549 ******
2025-09-19 11:43:11.160394 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:43:11.160406 | orchestrator |
2025-09-19 11:43:11.160416 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-19 11:43:11.160427 | orchestrator | Friday 19 September 2025 11:42:43 +0000 (0:00:00.964) 0:00:05.513 ******
2025-09-19 11:43:11.160438 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-19 11:43:11.160451 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160462 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160473 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:11.160484 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160494 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-19 11:43:11.160505 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-19 11:43:11.160544 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-19 11:43:11.160556 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-19 11:43:11.160567 | orchestrator |
2025-09-19 11:43:11.160578 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-19 11:43:11.160588 | orchestrator | Friday 19 September 2025 11:42:57 +0000 (0:00:13.061) 0:00:18.575 ******
2025-09-19 11:43:11.160600 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-19 11:43:11.160611 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160622 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160632 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:11.160643 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:11.160654 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-19 11:43:11.160665 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-19 11:43:11.160676 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-19 11:43:11.160686 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-19 11:43:11.160697 | orchestrator |
2025-09-19 11:43:11.160708 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:43:11.160719 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:43:11.160731 | orchestrator |
2025-09-19 11:43:11.160742 | orchestrator |
2025-09-19 11:43:11.160754 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:43:11.160765 | orchestrator | Friday 19 September 2025 11:43:03 +0000 (0:00:06.678) 0:00:25.254 ******
2025-09-19 11:43:11.160775 | orchestrator | ===============================================================================
2025-09-19 11:43:11.160786 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.06s
2025-09-19 11:43:11.160796 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.68s
2025-09-19 11:43:11.160822 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.39s
2025-09-19 11:43:11.160833 | orchestrator | Create share directory -------------------------------------------------- 0.96s
2025-09-19 11:43:11.160844 | orchestrator |
2025-09-19 11:43:11.160854 | orchestrator |
2025-09-19 11:43:11.160865 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:43:11.160876 | orchestrator |
2025-09-19 11:43:11.160904 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:43:11.160915 | orchestrator | Friday 19 September 2025 11:41:34 +0000 (0:00:00.238) 0:00:00.238 ******
2025-09-19 11:43:11.160926 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.160937 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.160948 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.160959 | orchestrator |
2025-09-19 11:43:11.160970 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:43:11.160981 | orchestrator | Friday 19 September 2025 11:41:34 +0000 (0:00:00.306) 0:00:00.544 ******
2025-09-19 11:43:11.160992 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-19 11:43:11.161003 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-19 11:43:11.161014 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-19 11:43:11.161025 | orchestrator |
2025-09-19 11:43:11.161036 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-19 11:43:11.161046 | orchestrator |
2025-09-19 11:43:11.161057 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 11:43:11.161068 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:00.480) 0:00:01.024 ******
2025-09-19 11:43:11.161088 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:43:11.161099 | orchestrator |
2025-09-19 11:43:11.161110 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-09-19 11:43:11.161121 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:00.544) 0:00:01.569 ******
2025-09-19 11:43:11.161141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 11:43:11.161187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 11:43:11.161209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 11:43:11.161303 | orchestrator |
2025-09-19 11:43:11.161430 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-19 11:43:11.161444 | orchestrator | Friday 19 September 2025 11:41:37 +0000 (0:00:01.315) 0:00:02.884 ******
2025-09-19 11:43:11.161456 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.161467 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.161478 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.161489 | orchestrator |
2025-09-19 11:43:11.161500 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 11:43:11.161517 | orchestrator | Friday 19 September 2025 11:41:37 +0000 (0:00:00.474) 0:00:03.358 ******
2025-09-19 11:43:11.161528 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 11:43:11.161539 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 11:43:11.161559 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 11:43:11.161570 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 11:43:11.161581 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 11:43:11.161602 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 11:43:11.161613 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-09-19 11:43:11.161624 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 11:43:11.161635 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 11:43:11.161646 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 11:43:11.161657 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 11:43:11.161667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 11:43:11.161678 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 11:43:11.161689 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 11:43:11.161700 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-09-19 11:43:11.161710 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 11:43:11.161721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-19 11:43:11.161732 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-19 11:43:11.161743 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-09-19 11:43:11.161754 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-09-19 11:43:11.161764 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-09-19 11:43:11.161775 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-09-19 11:43:11.161786 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-09-19 11:43:11.161796 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-09-19 11:43:11.161808 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-09-19 11:43:11.161821 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-09-19 11:43:11.161832 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-09-19 11:43:11.161843 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-09-19 11:43:11.161854 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-09-19 11:43:11.161865 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-09-19 11:43:11.161876 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-09-19 11:43:11.161887 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-09-19 11:43:11.161897 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-09-19 11:43:11.161909 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-09-19 11:43:11.161927 | orchestrator |
2025-09-19 11:43:11.161938 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.161949 | orchestrator | Friday 19 September 2025 11:41:38 +0000 (0:00:00.782) 0:00:04.141 ******
2025-09-19 11:43:11.161960 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.161971 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.161981 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.161992 | orchestrator |
2025-09-19 11:43:11.162008 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.162082 | orchestrator | Friday 19 September 2025 11:41:38 +0000 (0:00:00.323) 0:00:04.465 ******
2025-09-19 11:43:11.162096 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162108 | orchestrator |
2025-09-19 11:43:11.162120 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.162139 | orchestrator | Friday 19 September 2025 11:41:38 +0000 (0:00:00.139) 0:00:04.604 ******
2025-09-19 11:43:11.162153 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162165 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.162177 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.162189 | orchestrator |
2025-09-19 11:43:11.162201 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.162213 | orchestrator | Friday 19 September 2025 11:41:39 +0000 (0:00:00.455) 0:00:05.060 ******
2025-09-19 11:43:11.162225 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.162296 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.162310 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.162322 | orchestrator |
2025-09-19 11:43:11.162334 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.162346 | orchestrator | Friday 19 September 2025 11:41:39 +0000 (0:00:00.327) 0:00:05.388 ******
2025-09-19 11:43:11.162358 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162370 | orchestrator |
2025-09-19 11:43:11.162382 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.162394 | orchestrator | Friday 19 September 2025 11:41:39 +0000 (0:00:00.143) 0:00:05.532 ******
2025-09-19 11:43:11.162407 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162419 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.162430 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.162441 | orchestrator |
2025-09-19 11:43:11.162451 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.162462 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:00.316) 0:00:05.848 ******
2025-09-19 11:43:11.162473 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.162484 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.162495 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.162505 | orchestrator |
2025-09-19 11:43:11.162516 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.162527 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:00.365) 0:00:06.214 ******
2025-09-19 11:43:11.162538 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162549 | orchestrator |
2025-09-19 11:43:11.162560 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.162570 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:00.115) 0:00:06.329 ******
2025-09-19 11:43:11.162581 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162592 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.162603 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.162614 | orchestrator |
2025-09-19 11:43:11.162624 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.162635 | orchestrator | Friday 19 September 2025 11:41:41 +0000 (0:00:00.497) 0:00:06.827 ******
2025-09-19 11:43:11.162646 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.162657 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.162667 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.162688 | orchestrator |
2025-09-19 11:43:11.162699 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.162710 | orchestrator | Friday 19 September 2025 11:41:41 +0000 (0:00:00.325) 0:00:07.152 ******
2025-09-19 11:43:11.162721 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162731 | orchestrator |
2025-09-19 11:43:11.162742 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.162752 | orchestrator | Friday 19 September 2025 11:41:41 +0000 (0:00:00.134) 0:00:07.287 ******
2025-09-19 11:43:11.162762 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162771 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.162781 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.162790 | orchestrator |
2025-09-19 11:43:11.162800 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.162809 | orchestrator | Friday 19 September 2025 11:41:41 +0000 (0:00:00.296) 0:00:07.583 ******
2025-09-19 11:43:11.162819 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.162829 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.162838 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.162848 | orchestrator |
2025-09-19 11:43:11.162857 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.162867 | orchestrator | Friday 19 September 2025 11:41:42 +0000 (0:00:00.475) 0:00:08.059 ******
2025-09-19 11:43:11.162877 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162886 | orchestrator |
2025-09-19 11:43:11.162896 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.162906 | orchestrator | Friday 19 September 2025 11:41:42 +0000 (0:00:00.148) 0:00:08.207 ******
2025-09-19 11:43:11.162915 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.162925 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.162934 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.162944 | orchestrator |
2025-09-19 11:43:11.162953 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.162963 | orchestrator | Friday 19 September 2025 11:41:42 +0000 (0:00:00.330) 0:00:08.537 ******
2025-09-19 11:43:11.162973 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.162982 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.162992 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.163001 | orchestrator |
2025-09-19 11:43:11.163011 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.163021 | orchestrator | Friday 19 September 2025 11:41:43 +0000 (0:00:00.342) 0:00:08.880 ******
2025-09-19 11:43:11.163030 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.163040 | orchestrator |
2025-09-19 11:43:11.163049 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.163059 | orchestrator | Friday 19 September 2025 11:41:43 +0000 (0:00:00.135) 0:00:09.016 ******
2025-09-19 11:43:11.163069 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.163079 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.163088 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.163098 | orchestrator |
2025-09-19 11:43:11.163112 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.163122 | orchestrator | Friday 19 September 2025 11:41:43 +0000 (0:00:00.287) 0:00:09.303 ******
2025-09-19 11:43:11.163132 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.163141 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.163151 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.163160 | orchestrator |
2025-09-19 11:43:11.163176 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.163186 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.503) 0:00:09.806 ******
2025-09-19 11:43:11.163196 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.163205 | orchestrator |
2025-09-19 11:43:11.163215 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:43:11.163249 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.128) 0:00:09.935 ******
2025-09-19 11:43:11.163259 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:11.163268 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:11.163278 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:11.163287 | orchestrator |
2025-09-19 11:43:11.163297 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:43:11.163306 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.288) 0:00:10.224 ******
2025-09-19 11:43:11.163316 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:11.163325 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:11.163335 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:11.163344 | orchestrator |
2025-09-19 11:43:11.163354 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:43:11.163363 | orchestrator
| Friday 19 September 2025 11:41:44 +0000 (0:00:00.310) 0:00:10.534 ****** 2025-09-19 11:43:11.163373 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163383 | orchestrator | 2025-09-19 11:43:11.163392 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 11:43:11.163402 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.124) 0:00:10.659 ****** 2025-09-19 11:43:11.163411 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163421 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.163430 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.163440 | orchestrator | 2025-09-19 11:43:11.163449 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 11:43:11.163459 | orchestrator | Friday 19 September 2025 11:41:45 +0000 (0:00:00.280) 0:00:10.940 ****** 2025-09-19 11:43:11.163468 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:43:11.163478 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:43:11.163487 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:43:11.163497 | orchestrator | 2025-09-19 11:43:11.163506 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 11:43:11.163516 | orchestrator | Friday 19 September 2025 11:41:45 +0000 (0:00:00.576) 0:00:11.517 ****** 2025-09-19 11:43:11.163525 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163535 | orchestrator | 2025-09-19 11:43:11.163544 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 11:43:11.163554 | orchestrator | Friday 19 September 2025 11:41:45 +0000 (0:00:00.134) 0:00:11.651 ****** 2025-09-19 11:43:11.163563 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163573 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.163582 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
11:43:11.163592 | orchestrator | 2025-09-19 11:43:11.163602 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 11:43:11.163611 | orchestrator | Friday 19 September 2025 11:41:46 +0000 (0:00:00.287) 0:00:11.938 ****** 2025-09-19 11:43:11.163621 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:43:11.163630 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:43:11.163640 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:43:11.163649 | orchestrator | 2025-09-19 11:43:11.163659 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 11:43:11.163668 | orchestrator | Friday 19 September 2025 11:41:46 +0000 (0:00:00.326) 0:00:12.265 ****** 2025-09-19 11:43:11.163678 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163687 | orchestrator | 2025-09-19 11:43:11.163697 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 11:43:11.163706 | orchestrator | Friday 19 September 2025 11:41:46 +0000 (0:00:00.137) 0:00:12.402 ****** 2025-09-19 11:43:11.163716 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.163725 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.163735 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.163744 | orchestrator | 2025-09-19 11:43:11.163754 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-19 11:43:11.163763 | orchestrator | Friday 19 September 2025 11:41:47 +0000 (0:00:00.509) 0:00:12.911 ****** 2025-09-19 11:43:11.163780 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:43:11.163790 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:11.163799 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:43:11.163809 | orchestrator | 2025-09-19 11:43:11.163818 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 
2025-09-19 11:43:11.163828 | orchestrator | Friday 19 September 2025 11:41:48 +0000 (0:00:01.687) 0:00:14.598 ****** 2025-09-19 11:43:11.163837 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 11:43:11.163847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 11:43:11.163856 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-19 11:43:11.163866 | orchestrator | 2025-09-19 11:43:11.163875 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-19 11:43:11.163885 | orchestrator | Friday 19 September 2025 11:41:50 +0000 (0:00:01.943) 0:00:16.542 ****** 2025-09-19 11:43:11.163894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 11:43:11.163904 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 11:43:11.163924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-19 11:43:11.163934 | orchestrator | 2025-09-19 11:43:11.163944 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-19 11:43:11.163954 | orchestrator | Friday 19 September 2025 11:41:53 +0000 (0:00:02.388) 0:00:18.931 ****** 2025-09-19 11:43:11.163969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 11:43:11.163979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 11:43:11.163989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-19 11:43:11.163998 | orchestrator | 2025-09-19 11:43:11.164007 | 
orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-19 11:43:11.164017 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:02.092) 0:00:21.023 ****** 2025-09-19 11:43:11.164026 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.164036 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.164045 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.164055 | orchestrator | 2025-09-19 11:43:11.164064 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-19 11:43:11.164074 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.350) 0:00:21.373 ****** 2025-09-19 11:43:11.164083 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.164093 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.164102 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.164112 | orchestrator | 2025-09-19 11:43:11.164122 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:43:11.164131 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.308) 0:00:21.682 ****** 2025-09-19 11:43:11.164140 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:43:11.164150 | orchestrator | 2025-09-19 11:43:11.164160 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-19 11:43:11.164169 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.560) 0:00:22.243 ****** 2025-09-19 11:43:11.164180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164283 | orchestrator | 2025-09-19 11:43:11.164293 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-19 11:43:11.164303 | orchestrator | Friday 19 September 2025 11:41:58 +0000 (0:00:01.829) 0:00:24.073 ****** 2025-09-19 11:43:11 | INFO  | Task c2d6eeb1-e6f4-4f9a-a1e2-a324a472f466 is in state SUCCESS 2025-09-19 11:43:11.164326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:43:11.164359 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.164381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:43:11.164393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.164403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:43:11.164421 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.164430 | orchestrator | 2025-09-19 11:43:11.164440 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-19 11:43:11.164449 | orchestrator | Friday 19 September 2025 11:41:58 +0000 (0:00:00.604) 0:00:24.677 ****** 2025-09-19 11:43:11.164473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-09-19 11:43:11.164484 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.164494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:43:11.164511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.164535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:43:11.164546 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:11.164556 | orchestrator | 2025-09-19 11:43:11.164565 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-19 11:43:11.164581 | orchestrator | Friday 19 September 2025 11:41:59 +0000 (0:00:00.786) 0:00:25.464 ****** 2025-09-19 11:43:11.164592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:43:11.164641 | orchestrator | 2025-09-19 11:43:11.164649 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:43:11.164657 | orchestrator | Friday 19 September 2025 11:42:01 +0000 (0:00:01.373) 0:00:26.837 ****** 2025-09-19 11:43:11.164665 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:11.164673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:11.164680 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 11:43:11.164688 | orchestrator | 2025-09-19 11:43:11.164696 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:43:11.164708 | orchestrator | Friday 19 September 2025 11:42:01 +0000 (0:00:00.260) 0:00:27.097 ****** 2025-09-19 11:43:11.164716 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:43:11.164724 | orchestrator | 2025-09-19 11:43:11.164732 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-19 11:43:11.164745 | orchestrator | Friday 19 September 2025 11:42:01 +0000 (0:00:00.482) 0:00:27.580 ****** 2025-09-19 11:43:11.164753 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:11.164761 | orchestrator | 2025-09-19 11:43:11.164768 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-19 11:43:11.164776 | orchestrator | Friday 19 September 2025 11:42:04 +0000 (0:00:02.258) 0:00:29.839 ****** 2025-09-19 11:43:11.164784 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:11.164800 | orchestrator | 2025-09-19 11:43:11.164808 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-19 11:43:11.164816 | orchestrator | Friday 19 September 2025 11:42:06 +0000 (0:00:02.465) 0:00:32.304 ****** 2025-09-19 11:43:11.164823 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:11.164831 | orchestrator | 2025-09-19 11:43:11.164839 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 11:43:11.164847 | orchestrator | Friday 19 September 2025 11:42:21 +0000 (0:00:15.124) 0:00:47.429 ****** 2025-09-19 11:43:11.164855 | orchestrator | 2025-09-19 11:43:11.164862 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 
11:43:11.164870 | orchestrator | Friday 19 September 2025 11:42:21 +0000 (0:00:00.083) 0:00:47.513 ******
2025-09-19 11:43:11.164878 | orchestrator |
2025-09-19 11:43:11.164886 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-19 11:43:11.164894 | orchestrator | Friday 19 September 2025 11:42:21 +0000 (0:00:00.062) 0:00:47.575 ******
2025-09-19 11:43:11.164901 | orchestrator |
2025-09-19 11:43:11.164909 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-19 11:43:11.164917 | orchestrator | Friday 19 September 2025 11:42:21 +0000 (0:00:00.069) 0:00:47.645 ******
2025-09-19 11:43:11.164925 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:43:11.164933 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:43:11.164940 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:43:11.164948 | orchestrator |
2025-09-19 11:43:11.164956 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:43:11.164964 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-19 11:43:11.164972 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-19 11:43:11.164980 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-19 11:43:11.164988 | orchestrator |
2025-09-19 11:43:11.164996 | orchestrator |
2025-09-19 11:43:11.165004 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:43:11.165011 | orchestrator | Friday 19 September 2025 11:43:10 +0000 (0:00:48.495) 0:01:36.140 ******
2025-09-19 11:43:11.165019 | orchestrator | ===============================================================================
2025-09-19 11:43:11.165027 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.50s
2025-09-19 11:43:11.165035 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.12s
2025-09-19 11:43:11.165042 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.47s
2025-09-19 11:43:11.165050 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.39s
2025-09-19 11:43:11.165058 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s
2025-09-19 11:43:11.165066 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.09s
2025-09-19 11:43:11.165073 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.94s
2025-09-19 11:43:11.165081 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s
2025-09-19 11:43:11.165089 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s
2025-09-19 11:43:11.165096 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.37s
2025-09-19 11:43:11.165104 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.32s
2025-09-19 11:43:11.165112 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.79s
2025-09-19 11:43:11.165120 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-09-19 11:43:11.165127 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.60s
2025-09-19 11:43:11.165141 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s
2025-09-19 11:43:11.165149 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2025-09-19 11:43:11.165157 | orchestrator | horizon : include_tasks
------------------------------------------------- 0.54s 2025-09-19 11:43:11.165165 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-09-19 11:43:11.165173 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-09-19 11:43:11.165180 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-09-19 11:43:11.165188 | orchestrator | 2025-09-19 11:43:11 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:11.165200 | orchestrator | 2025-09-19 11:43:11 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:11.165208 | orchestrator | 2025-09-19 11:43:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:14.196201 | orchestrator | 2025-09-19 11:43:14 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:14.197572 | orchestrator | 2025-09-19 11:43:14 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:14.197603 | orchestrator | 2025-09-19 11:43:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:17.239391 | orchestrator | 2025-09-19 11:43:17 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:17.241607 | orchestrator | 2025-09-19 11:43:17 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:17.241656 | orchestrator | 2025-09-19 11:43:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:20.288388 | orchestrator | 2025-09-19 11:43:20 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:20.288501 | orchestrator | 2025-09-19 11:43:20 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:20.288523 | orchestrator | 2025-09-19 11:43:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:23.338006 | 
orchestrator | 2025-09-19 11:43:23 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:23.338945 | orchestrator | 2025-09-19 11:43:23 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:23.338985 | orchestrator | 2025-09-19 11:43:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:26.381345 | orchestrator | 2025-09-19 11:43:26 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:26.383304 | orchestrator | 2025-09-19 11:43:26 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:26.383352 | orchestrator | 2025-09-19 11:43:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:29.418573 | orchestrator | 2025-09-19 11:43:29 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:29.418683 | orchestrator | 2025-09-19 11:43:29 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:29.418698 | orchestrator | 2025-09-19 11:43:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:32.462951 | orchestrator | 2025-09-19 11:43:32 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:32.465198 | orchestrator | 2025-09-19 11:43:32 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:32.465258 | orchestrator | 2025-09-19 11:43:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:35.495134 | orchestrator | 2025-09-19 11:43:35 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:35.496930 | orchestrator | 2025-09-19 11:43:35 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:35.497055 | orchestrator | 2025-09-19 11:43:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:38.540571 | orchestrator | 2025-09-19 11:43:38 | INFO  | Task 
a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:38.543095 | orchestrator | 2025-09-19 11:43:38 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:38.543141 | orchestrator | 2025-09-19 11:43:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:41.581956 | orchestrator | 2025-09-19 11:43:41 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:41.583365 | orchestrator | 2025-09-19 11:43:41 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:41.583654 | orchestrator | 2025-09-19 11:43:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:44.626658 | orchestrator | 2025-09-19 11:43:44 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:44.628034 | orchestrator | 2025-09-19 11:43:44 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:44.628066 | orchestrator | 2025-09-19 11:43:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:47.665775 | orchestrator | 2025-09-19 11:43:47 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:47.667200 | orchestrator | 2025-09-19 11:43:47 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:47.667307 | orchestrator | 2025-09-19 11:43:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:50.706293 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:50.707754 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:50.708024 | orchestrator | 2025-09-19 11:43:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:53.755109 | orchestrator | 2025-09-19 11:43:53 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 
11:43:53.756697 | orchestrator | 2025-09-19 11:43:53 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:53.756860 | orchestrator | 2025-09-19 11:43:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:56.799595 | orchestrator | 2025-09-19 11:43:56 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:56.800932 | orchestrator | 2025-09-19 11:43:56 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state STARTED 2025-09-19 11:43:56.800972 | orchestrator | 2025-09-19 11:43:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:59.841899 | orchestrator | 2025-09-19 11:43:59 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:43:59.843462 | orchestrator | 2025-09-19 11:43:59 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:43:59.845465 | orchestrator | 2025-09-19 11:43:59 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:43:59.847390 | orchestrator | 2025-09-19 11:43:59 | INFO  | Task 19747822-c9d9-4230-aa76-e182cb4ab016 is in state STARTED 2025-09-19 11:43:59.850275 | orchestrator | 2025-09-19 11:43:59 | INFO  | Task 12145553-f845-4098-a388-8e2eed30bc4a is in state SUCCESS 2025-09-19 11:43:59.850530 | orchestrator | 2025-09-19 11:43:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:02.883494 | orchestrator | 2025-09-19 11:44:02 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:02.883582 | orchestrator | 2025-09-19 11:44:02 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:02.884284 | orchestrator | 2025-09-19 11:44:02 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:44:02.886270 | orchestrator | 2025-09-19 11:44:02 | INFO  | Task 19747822-c9d9-4230-aa76-e182cb4ab016 is in state STARTED 2025-09-19 11:44:02.886316 | orchestrator 
| 2025-09-19 11:44:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:06.000832 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:06.000933 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:06.000956 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state STARTED 2025-09-19 11:44:06.000968 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:06.000979 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:06.000990 | orchestrator | 2025-09-19 11:44:05 | INFO  | Task 19747822-c9d9-4230-aa76-e182cb4ab016 is in state SUCCESS 2025-09-19 11:44:06.001001 | orchestrator | 2025-09-19 11:44:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:08.987573 | orchestrator | 2025-09-19 11:44:08 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:08.987660 | orchestrator | 2025-09-19 11:44:08 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:08.989443 | orchestrator | 2025-09-19 11:44:08 | INFO  | Task a98104e8-27bd-4934-9b9c-0a0d14caf9c8 is in state SUCCESS 2025-09-19 11:44:08.991931 | orchestrator | 2025-09-19 11:44:08.991992 | orchestrator | 2025-09-19 11:44:08.992011 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-19 11:44:08.992030 | orchestrator | 2025-09-19 11:44:08.992048 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-19 11:44:08.992065 | orchestrator | Friday 19 September 2025 11:43:08 +0000 (0:00:00.248) 0:00:00.248 ****** 2025-09-19 11:44:08.992085 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-19 11:44:08.992104 | orchestrator | 2025-09-19 11:44:08.992125 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-19 11:44:08.992160 | orchestrator | Friday 19 September 2025 11:43:08 +0000 (0:00:00.239) 0:00:00.488 ****** 2025-09-19 11:44:08.992179 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-19 11:44:08.992298 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-19 11:44:08.992318 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-19 11:44:08.992337 | orchestrator | 2025-09-19 11:44:08.992387 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-19 11:44:08.992407 | orchestrator | Friday 19 September 2025 11:43:09 +0000 (0:00:01.188) 0:00:01.677 ****** 2025-09-19 11:44:08.992426 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-19 11:44:08.992445 | orchestrator | 2025-09-19 11:44:08.992463 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-19 11:44:08.992482 | orchestrator | Friday 19 September 2025 11:43:10 +0000 (0:00:01.228) 0:00:02.905 ****** 2025-09-19 11:44:08.992532 | orchestrator | changed: [testbed-manager] 2025-09-19 11:44:08.992553 | orchestrator | 2025-09-19 11:44:08.992574 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-19 11:44:08.992593 | orchestrator | Friday 19 September 2025 11:43:11 +0000 (0:00:01.019) 0:00:03.925 ****** 2025-09-19 11:44:08.992612 | orchestrator | changed: [testbed-manager] 2025-09-19 11:44:08.992630 | orchestrator | 2025-09-19 11:44:08.992648 | orchestrator | TASK [osism.services.cephclient : Manage 
cephclient service] ******************* 2025-09-19 11:44:08.992666 | orchestrator | Friday 19 September 2025 11:43:12 +0000 (0:00:00.858) 0:00:04.783 ****** 2025-09-19 11:44:08.992685 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-19 11:44:08.992703 | orchestrator | ok: [testbed-manager] 2025-09-19 11:44:08.992721 | orchestrator | 2025-09-19 11:44:08.992737 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-19 11:44:08.992752 | orchestrator | Friday 19 September 2025 11:43:48 +0000 (0:00:36.264) 0:00:41.047 ****** 2025-09-19 11:44:08.992767 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-19 11:44:08.992783 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-19 11:44:08.992800 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-19 11:44:08.992815 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-19 11:44:08.992831 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-19 11:44:08.992847 | orchestrator | 2025-09-19 11:44:08.992864 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-19 11:44:08.992880 | orchestrator | Friday 19 September 2025 11:43:52 +0000 (0:00:03.766) 0:00:44.814 ****** 2025-09-19 11:44:08.992895 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-19 11:44:08.992911 | orchestrator | 2025-09-19 11:44:08.992928 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-19 11:44:08.992944 | orchestrator | Friday 19 September 2025 11:43:53 +0000 (0:00:00.424) 0:00:45.239 ****** 2025-09-19 11:44:08.992960 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:44:08.992976 | orchestrator | 2025-09-19 11:44:08.992992 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-19 
11:44:08.993009 | orchestrator | Friday 19 September 2025 11:43:53 +0000 (0:00:00.114) 0:00:45.354 ****** 2025-09-19 11:44:08.993025 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:44:08.993040 | orchestrator | 2025-09-19 11:44:08.993056 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-19 11:44:08.993073 | orchestrator | Friday 19 September 2025 11:43:53 +0000 (0:00:00.288) 0:00:45.643 ****** 2025-09-19 11:44:08.993089 | orchestrator | changed: [testbed-manager] 2025-09-19 11:44:08.993106 | orchestrator | 2025-09-19 11:44:08.993121 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-19 11:44:08.993137 | orchestrator | Friday 19 September 2025 11:43:55 +0000 (0:00:01.662) 0:00:47.305 ****** 2025-09-19 11:44:08.993155 | orchestrator | changed: [testbed-manager] 2025-09-19 11:44:08.993171 | orchestrator | 2025-09-19 11:44:08.993214 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-19 11:44:08.993232 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:00.746) 0:00:48.051 ****** 2025-09-19 11:44:08.993248 | orchestrator | changed: [testbed-manager] 2025-09-19 11:44:08.993264 | orchestrator | 2025-09-19 11:44:08.993280 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-19 11:44:08.993297 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:00.646) 0:00:48.697 ****** 2025-09-19 11:44:08.993312 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-19 11:44:08.993329 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-19 11:44:08.993345 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-19 11:44:08.993362 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-19 11:44:08.993378 | orchestrator | 2025-09-19 11:44:08.993394 | orchestrator | PLAY RECAP 
*********************************************************************
2025-09-19 11:44:08.993424 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:44:08.993442 | orchestrator |
2025-09-19 11:44:08.993458 | orchestrator |
2025-09-19 11:44:08.993546 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:44:08.993568 | orchestrator | Friday 19 September 2025 11:43:57 +0000 (0:00:01.292) 0:00:49.990 ******
2025-09-19 11:44:08.993584 | orchestrator | ===============================================================================
2025-09-19 11:44:08.993602 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.26s
2025-09-19 11:44:08.993618 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.77s
2025-09-19 11:44:08.993635 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.66s
2025-09-19 11:44:08.993651 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.29s
2025-09-19 11:44:08.993677 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.23s
2025-09-19 11:44:08.993692 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.19s
2025-09-19 11:44:08.993707 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s
2025-09-19 11:44:08.993723 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.86s
2025-09-19 11:44:08.993738 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s
2025-09-19 11:44:08.993755 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-09-19 11:44:08.993771 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2025-09-19 11:44:08.993788 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-09-19 11:44:08.993805 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-09-19 11:44:08.993821 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2025-09-19 11:44:08.993837 | orchestrator |
2025-09-19 11:44:08.993854 | orchestrator |
2025-09-19 11:44:08.993871 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:44:08.993887 | orchestrator |
2025-09-19 11:44:08.993903 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:44:08.993918 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:00.161) 0:00:00.161 ******
2025-09-19 11:44:08.993934 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:44:08.993951 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:44:08.993966 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:44:08.993982 | orchestrator |
2025-09-19 11:44:08.993999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:44:08.994066 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:00.265) 0:00:00.426 ******
2025-09-19 11:44:08.994090 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-19 11:44:08.994108 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-19 11:44:08.994124 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-19 11:44:08.994139 | orchestrator |
2025-09-19 11:44:08.994156 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-19 11:44:08.994173 | orchestrator |
2025-09-19 11:44:08.994212 | orchestrator | TASK [Waiting for Keystone public port to be UP]
*******************************
2025-09-19 11:44:08.994230 | orchestrator | Friday 19 September 2025 11:44:02 +0000 (0:00:00.584) 0:00:01.010 ******
2025-09-19 11:44:08.994247 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:44:08.994263 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:44:08.994279 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:44:08.994295 | orchestrator |
2025-09-19 11:44:08.994312 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:44:08.994329 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:44:08.994358 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:44:08.994375 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:44:08.994391 | orchestrator |
2025-09-19 11:44:08.994408 | orchestrator |
2025-09-19 11:44:08.994424 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:44:08.994439 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.729) 0:00:01.740 ******
2025-09-19 11:44:08.994449 | orchestrator | ===============================================================================
2025-09-19 11:44:08.994459 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.73s
2025-09-19 11:44:08.994471 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-09-19 11:44:08.994488 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-09-19 11:44:08.994504 | orchestrator |
2025-09-19 11:44:08.994520 | orchestrator |
2025-09-19 11:44:08.994536 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:44:08.994551 | orchestrator |
2025-09-19
11:44:08.994568 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:44:08.994585 | orchestrator | Friday 19 September 2025 11:41:34 +0000 (0:00:00.242) 0:00:00.242 ****** 2025-09-19 11:44:08.994601 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.994618 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:08.994634 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:08.994649 | orchestrator | 2025-09-19 11:44:08.994665 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:44:08.994681 | orchestrator | Friday 19 September 2025 11:41:34 +0000 (0:00:00.350) 0:00:00.593 ****** 2025-09-19 11:44:08.994697 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 11:44:08.994713 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 11:44:08.994729 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 11:44:08.994745 | orchestrator | 2025-09-19 11:44:08.994761 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-19 11:44:08.994777 | orchestrator | 2025-09-19 11:44:08.994854 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 11:44:08.994875 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:00.441) 0:00:01.035 ****** 2025-09-19 11:44:08.994891 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:44:08.994908 | orchestrator | 2025-09-19 11:44:08.994924 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-19 11:44:08.994940 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:00.617) 0:00:01.652 ****** 2025-09-19 11:44:08.994969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:44:08.995001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
2025-09-19 11:44:08.995020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:44:08.995038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995239 | orchestrator | 2025-09-19 11:44:08.995252 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-19 11:44:08.995262 | orchestrator | Friday 19 September 2025 11:41:37 +0000 (0:00:01.836) 0:00:03.489 ****** 2025-09-19 11:44:08.995271 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-19 11:44:08.995281 | orchestrator | 2025-09-19 11:44:08.995291 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-19 11:44:08.995300 | orchestrator | Friday 19 September 2025 11:41:38 +0000 (0:00:00.803) 0:00:04.292 ****** 2025-09-19 11:44:08.995310 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.995319 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:08.995329 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:08.995339 | orchestrator | 2025-09-19 11:44:08.995348 | orchestrator | TASK [keystone : 
Check if Keystone domain-specific config is supplied] ********* 2025-09-19 11:44:08.995358 | orchestrator | Friday 19 September 2025 11:41:39 +0000 (0:00:00.471) 0:00:04.764 ****** 2025-09-19 11:44:08.995367 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:44:08.995377 | orchestrator | 2025-09-19 11:44:08.995386 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 11:44:08.995396 | orchestrator | Friday 19 September 2025 11:41:39 +0000 (0:00:00.712) 0:00:05.477 ****** 2025-09-19 11:44:08.995406 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:44:08.995416 | orchestrator | 2025-09-19 11:44:08.995432 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-19 11:44:08.995442 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:00.524) 0:00:06.002 ****** 2025-09-19 11:44:08.995458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:44:08.995476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:44:08.995488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:44:08.995499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:44:08.995571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2025-09-19 11:44:08.995581 | orchestrator | 2025-09-19 11:44:08.995591 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 11:44:08.995601 | orchestrator | Friday 19 September 2025 11:41:43 +0000 (0:00:03.181) 0:00:09.183 ****** 2025-09-19 11:44:08.995611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:44:08.995628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2025-09-19 11:44:08.995653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:44:08.995664 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.995673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:44:08.995682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:44:08.995690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:44:08.995698 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:08.995712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:44:08.995729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:44:08.995738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:44:08.995746 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:08.995754 | orchestrator | 2025-09-19 11:44:08.995762 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-19 11:44:08.995770 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.791) 0:00:09.975 ****** 2025-09-19 11:44:08.995779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:44:08.995787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:44:08.995796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:44:08.995809 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.995826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:44:08.995835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 
11:44:08.995844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.995852 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.995861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.995869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.995889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.995898 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.995906 | orchestrator |
2025-09-19 11:44:08.995914 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-19 11:44:08.995922 | orchestrator | Friday 19 September 2025 11:41:44 +0000 (0:00:00.735) 0:00:10.711 ******
2025-09-19 11:44:08.995934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.995943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.995952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.995970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.995982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.995990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.995999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996024 | orchestrator |
2025-09-19 11:44:08.996032 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-19 11:44:08.996044 | orchestrator | Friday 19 September 2025 11:41:48 +0000 (0:00:03.164) 0:00:13.875 ******
2025-09-19 11:44:08.996057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996151 | orchestrator |
2025-09-19 11:44:08.996159 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-19 11:44:08.996167 | orchestrator | Friday 19 September 2025 11:41:53 +0000 (0:00:05.503) 0:00:19.378 ******
2025-09-19 11:44:08.996175 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:44:08.996200 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:44:08.996208 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:44:08.996216 | orchestrator |
2025-09-19 11:44:08.996224 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-19 11:44:08.996232 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:01.512) 0:00:20.891 ******
2025-09-19 11:44:08.996239 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:44:08.996247 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.996255 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.996263 | orchestrator |
2025-09-19 11:44:08.996271 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-19 11:44:08.996279 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.293) 0:00:21.409 ******
2025-09-19 11:44:08.996291 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:44:08.996299 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.996307 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.996314 | orchestrator |
2025-09-19 11:44:08.996323 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-19 11:44:08.996330 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.513) 0:00:21.702 ******
2025-09-19 11:44:08.996338 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:44:08.996346 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.996354 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.996361 | orchestrator |
2025-09-19 11:44:08.996369 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-19 11:44:08.996377 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.513) 0:00:22.216 ******
2025-09-19 11:44:08.996386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.996446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.996461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.996489 | orchestrator |
2025-09-19 11:44:08.996497 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:44:08.996505 | orchestrator | Friday 19 September 2025 11:41:59 +0000 (0:00:02.568) 0:00:24.784 ******
2025-09-19 11:44:08.996513 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:44:08.996525 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.996533 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.996541 | orchestrator |
2025-09-19 11:44:08.996549 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-19 11:44:08.996557 | orchestrator | Friday 19 September 2025 11:41:59 +0000 (0:00:00.298) 0:00:25.082 ******
2025-09-19 11:44:08.996565 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-19 11:44:08.996573 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-19 11:44:08.996581 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-19 11:44:08.996589 | orchestrator |
2025-09-19 11:44:08.996597 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-09-19 11:44:08.996605 | orchestrator | Friday 19 September 2025 11:42:00 +0000 (0:00:01.492) 0:00:26.575 ******
2025-09-19 11:44:08.996613 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:44:08.996621 | orchestrator |
2025-09-19 11:44:08.996629 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-19 11:44:08.996637 | orchestrator | Friday 19 September 2025 11:42:01 +0000 (0:00:00.807) 0:00:27.382 ******
2025-09-19 11:44:08.996645 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:44:08.996652 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:44:08.996660 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:44:08.996668 | orchestrator |
2025-09-19 11:44:08.996676 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-19 11:44:08.996684 | orchestrator | Friday 19 September 2025 11:42:02 +0000 (0:00:00.638) 0:00:28.021 ******
2025-09-19 11:44:08.996692 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:44:08.996700 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 11:44:08.996708 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 11:44:08.996715 | orchestrator |
2025-09-19 11:44:08.996723 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-19 11:44:08.996731 | orchestrator | Friday 19 September 2025 11:42:03 +0000 (0:00:00.907) 0:00:28.929 ******
2025-09-19 11:44:08.996739 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:44:08.996747 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:44:08.996755 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:44:08.996763 | orchestrator |
2025-09-19 11:44:08.996771 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-09-19 11:44:08.996779 | orchestrator | Friday 19 September 2025 11:42:03 +0000 (0:00:00.277) 0:00:29.206 ******
2025-09-19 11:44:08.996787 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-19 11:44:08.996795 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-19 11:44:08.996803 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-19 11:44:08.996811 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-19 11:44:08.996819 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-19 11:44:08.996831 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-19 11:44:08.996839 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-19 11:44:08.996847 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-19 11:44:08.996855 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-19 11:44:08.996863 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-19 11:44:08.996878 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-19 11:44:08.996886 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-19 11:44:08.996894 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-19 11:44:08.996902 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-19 11:44:08.996910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-19 11:44:08.996918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:44:08.996926 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:44:08.996934 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:44:08.996942 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:44:08.996950 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:44:08.996958 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:44:08.996966 | orchestrator |
2025-09-19 11:44:08.996974 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-09-19 11:44:08.996982 | orchestrator | Friday 19 September 2025 11:42:11 +0000 (0:00:08.453) 0:00:37.660 ******
2025-09-19 11:44:08.996989 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:44:08.996997 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:44:08.997005 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:44:08.997013 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:44:08.997021 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:44:08.997029 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:44:08.997037 | orchestrator |
2025-09-19 11:44:08.997044 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-09-19 11:44:08.997052 | orchestrator | Friday 19 September 2025 11:42:14 +0000 (0:00:02.701) 0:00:40.362 ******
2025-09-19 11:44:08.997061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.997075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.997091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:44:08.997101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.997109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.997117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:44:08.997126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.997142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.997154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:44:08.997163 | orchestrator |
2025-09-19 11:44:08.997171 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:44:08.997179 | orchestrator | Friday 19 September 2025 11:42:16 +0000 (0:00:02.308) 0:00:42.670 ******
2025-09-19 11:44:08.997208 |
orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.997220 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:08.997232 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:08.997245 | orchestrator | 2025-09-19 11:44:08.997259 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 11:44:08.997273 | orchestrator | Friday 19 September 2025 11:42:17 +0000 (0:00:00.265) 0:00:42.935 ****** 2025-09-19 11:44:08.997286 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997296 | orchestrator | 2025-09-19 11:44:08.997304 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 11:44:08.997312 | orchestrator | Friday 19 September 2025 11:42:19 +0000 (0:00:02.307) 0:00:45.243 ****** 2025-09-19 11:44:08.997320 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997328 | orchestrator | 2025-09-19 11:44:08.997335 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-19 11:44:08.997343 | orchestrator | Friday 19 September 2025 11:42:21 +0000 (0:00:02.219) 0:00:47.462 ****** 2025-09-19 11:44:08.997351 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.997359 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:08.997367 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:08.997374 | orchestrator | 2025-09-19 11:44:08.997383 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-19 11:44:08.997390 | orchestrator | Friday 19 September 2025 11:42:22 +0000 (0:00:00.832) 0:00:48.295 ****** 2025-09-19 11:44:08.997398 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.997406 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:08.997414 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:08.997422 | orchestrator | 2025-09-19 11:44:08.997430 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping 
and not all hosts targeted] *** 2025-09-19 11:44:08.997437 | orchestrator | Friday 19 September 2025 11:42:23 +0000 (0:00:00.627) 0:00:48.923 ****** 2025-09-19 11:44:08.997445 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.997453 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:08.997461 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:08.997469 | orchestrator | 2025-09-19 11:44:08.997477 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-19 11:44:08.997484 | orchestrator | Friday 19 September 2025 11:42:23 +0000 (0:00:00.466) 0:00:49.389 ****** 2025-09-19 11:44:08.997492 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997506 | orchestrator | 2025-09-19 11:44:08.997514 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-19 11:44:08.997522 | orchestrator | Friday 19 September 2025 11:42:37 +0000 (0:00:14.230) 0:01:03.619 ****** 2025-09-19 11:44:08.997530 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997537 | orchestrator | 2025-09-19 11:44:08.997545 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 11:44:08.997553 | orchestrator | Friday 19 September 2025 11:42:47 +0000 (0:00:10.059) 0:01:13.679 ****** 2025-09-19 11:44:08.997561 | orchestrator | 2025-09-19 11:44:08.997569 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 11:44:08.997577 | orchestrator | Friday 19 September 2025 11:42:48 +0000 (0:00:00.067) 0:01:13.746 ****** 2025-09-19 11:44:08.997585 | orchestrator | 2025-09-19 11:44:08.997593 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 11:44:08.997600 | orchestrator | Friday 19 September 2025 11:42:48 +0000 (0:00:00.067) 0:01:13.813 ****** 2025-09-19 11:44:08.997608 | orchestrator | 2025-09-19 
11:44:08.997616 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-19 11:44:08.997624 | orchestrator | Friday 19 September 2025 11:42:48 +0000 (0:00:00.068) 0:01:13.882 ****** 2025-09-19 11:44:08.997632 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997639 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:08.997647 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:08.997655 | orchestrator | 2025-09-19 11:44:08.997663 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-19 11:44:08.997671 | orchestrator | Friday 19 September 2025 11:43:07 +0000 (0:00:19.275) 0:01:33.157 ****** 2025-09-19 11:44:08.997679 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997686 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:08.997694 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:08.997702 | orchestrator | 2025-09-19 11:44:08.997710 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-19 11:44:08.997717 | orchestrator | Friday 19 September 2025 11:43:12 +0000 (0:00:04.761) 0:01:37.919 ****** 2025-09-19 11:44:08.997725 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997733 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:08.997746 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:08.997754 | orchestrator | 2025-09-19 11:44:08.997762 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 11:44:08.997770 | orchestrator | Friday 19 September 2025 11:43:18 +0000 (0:00:06.426) 0:01:44.345 ****** 2025-09-19 11:44:08.997778 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:44:08.997786 | orchestrator | 2025-09-19 11:44:08.997794 | orchestrator | TASK [keystone : Waiting for 
Keystone SSH port to be UP] *********************** 2025-09-19 11:44:08.997802 | orchestrator | Friday 19 September 2025 11:43:19 +0000 (0:00:00.778) 0:01:45.124 ****** 2025-09-19 11:44:08.997810 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.997818 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:08.997830 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:08.997838 | orchestrator | 2025-09-19 11:44:08.997846 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-19 11:44:08.997854 | orchestrator | Friday 19 September 2025 11:43:20 +0000 (0:00:00.818) 0:01:45.942 ****** 2025-09-19 11:44:08.997862 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:08.997870 | orchestrator | 2025-09-19 11:44:08.997878 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-19 11:44:08.997886 | orchestrator | Friday 19 September 2025 11:43:21 +0000 (0:00:01.789) 0:01:47.732 ****** 2025-09-19 11:44:08.997894 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-19 11:44:08.997902 | orchestrator | 2025-09-19 11:44:08.997909 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-19 11:44:08.997917 | orchestrator | Friday 19 September 2025 11:43:33 +0000 (0:00:11.141) 0:01:58.874 ****** 2025-09-19 11:44:08.997930 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-19 11:44:08.997938 | orchestrator | 2025-09-19 11:44:08.997946 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-19 11:44:08.997954 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:23.295) 0:02:22.169 ****** 2025-09-19 11:44:08.997961 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-19 11:44:08.997970 | orchestrator | ok: [testbed-node-0] => (item=keystone -> 
https://api.testbed.osism.xyz:5000 -> public) 2025-09-19 11:44:08.997977 | orchestrator | 2025-09-19 11:44:08.997985 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-19 11:44:08.997993 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:06.622) 0:02:28.792 ****** 2025-09-19 11:44:08.998001 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.998009 | orchestrator | 2025-09-19 11:44:08.998041 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-19 11:44:08.998051 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.108) 0:02:28.900 ****** 2025-09-19 11:44:08.998059 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.998067 | orchestrator | 2025-09-19 11:44:08.998074 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-19 11:44:08.998082 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.131) 0:02:29.032 ****** 2025-09-19 11:44:08.998090 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.998098 | orchestrator | 2025-09-19 11:44:08.998106 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-19 11:44:08.998114 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.100) 0:02:29.132 ****** 2025-09-19 11:44:08.998122 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.998130 | orchestrator | 2025-09-19 11:44:08.998138 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-19 11:44:08.998146 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.572) 0:02:29.705 ****** 2025-09-19 11:44:08.998154 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:08.998161 | orchestrator | 2025-09-19 11:44:08.998169 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2025-09-19 11:44:08.998177 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:03.664) 0:02:33.369 ****** 2025-09-19 11:44:08.998228 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:08.998237 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:08.998244 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:08.998252 | orchestrator | 2025-09-19 11:44:08.998260 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:44:08.998268 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-19 11:44:08.998277 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 11:44:08.998285 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 11:44:08.998293 | orchestrator | 2025-09-19 11:44:08.998306 | orchestrator | 2025-09-19 11:44:08.998320 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:44:08.998331 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:00.455) 0:02:33.824 ****** 2025-09-19 11:44:08.998344 | orchestrator | =============================================================================== 2025-09-19 11:44:08.998358 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.30s 2025-09-19 11:44:08.998372 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.28s 2025-09-19 11:44:08.998385 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.23s 2025-09-19 11:44:08.998402 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.14s 2025-09-19 11:44:08.998410 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.06s 2025-09-19 
11:44:08.998423 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.45s 2025-09-19 11:44:08.998432 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.62s 2025-09-19 11:44:08.998446 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.43s 2025-09-19 11:44:08.998459 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.50s 2025-09-19 11:44:08.998473 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.76s 2025-09-19 11:44:08.998487 | orchestrator | keystone : Creating default user role ----------------------------------- 3.66s 2025-09-19 11:44:08.998501 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.18s 2025-09-19 11:44:08.998514 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.16s 2025-09-19 11:44:08.998522 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.70s 2025-09-19 11:44:08.998530 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.57s 2025-09-19 11:44:08.998537 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.31s 2025-09-19 11:44:08.998545 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.31s 2025-09-19 11:44:08.998553 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.22s 2025-09-19 11:44:08.998561 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.84s 2025-09-19 11:44:08.998568 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2025-09-19 11:44:08.998576 | orchestrator | 2025-09-19 11:44:08 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 
11:44:08.998584 | orchestrator | 2025-09-19 11:44:08 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:08.998592 | orchestrator | 2025-09-19 11:44:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:12.028916 | orchestrator | 2025-09-19 11:44:12 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:12.029931 | orchestrator | 2025-09-19 11:44:12 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:12.030397 | orchestrator | 2025-09-19 11:44:12 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:12.031154 | orchestrator | 2025-09-19 11:44:12 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:12.031681 | orchestrator | 2025-09-19 11:44:12 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:12.031716 | orchestrator | 2025-09-19 11:44:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:15.060864 | orchestrator | 2025-09-19 11:44:15 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:15.061079 | orchestrator | 2025-09-19 11:44:15 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:15.062109 | orchestrator | 2025-09-19 11:44:15 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:15.062923 | orchestrator | 2025-09-19 11:44:15 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:15.063769 | orchestrator | 2025-09-19 11:44:15 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:15.064361 | orchestrator | 2025-09-19 11:44:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:18.108803 | orchestrator | 2025-09-19 11:44:18 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:18.108879 | orchestrator 
| 2025-09-19 11:44:18 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:18.109528 | orchestrator | 2025-09-19 11:44:18 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:18.110270 | orchestrator | 2025-09-19 11:44:18 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:18.111684 | orchestrator | 2025-09-19 11:44:18 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:18.111706 | orchestrator | 2025-09-19 11:44:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:39.379927 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:39.380701 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:39.382959 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:39.387840 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:39.389244 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task 
4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:39.389264 | orchestrator | 2025-09-19 11:44:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:42.434856 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:42.435680 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:42.437224 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:42.438846 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:42.440661 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state STARTED 2025-09-19 11:44:42.440703 | orchestrator | 2025-09-19 11:44:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:45.485716 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:45.485797 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:45.485812 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:45.485848 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:45.485860 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task 4641bfcb-7f0c-412a-a50e-55464b265f00 is in state SUCCESS 2025-09-19 11:44:45.485870 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:44:45.485881 | orchestrator | 2025-09-19 11:44:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:48.501902 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task 
d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:44:48.502012 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:44:48.504426 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:44:48.504707 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:44:48.506474 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:44:48.506524 | orchestrator | 2025-09-19 11:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:18.878550 | orchestrator | 2025-09-19 11:45:18 | INFO  | Task 
d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:18.878743 | orchestrator | 2025-09-19 11:45:18 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:18.879793 | orchestrator | 2025-09-19 11:45:18 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:18.880330 | orchestrator | 2025-09-19 11:45:18 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:18.880884 | orchestrator | 2025-09-19 11:45:18 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:18.880906 | orchestrator | 2025-09-19 11:45:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:21.903377 | orchestrator | 2025-09-19 11:45:21 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:21.903465 | orchestrator | 2025-09-19 11:45:21 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:21.903818 | orchestrator | 2025-09-19 11:45:21 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:21.904293 | orchestrator | 2025-09-19 11:45:21 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:21.904844 | orchestrator | 2025-09-19 11:45:21 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:21.904866 | orchestrator | 2025-09-19 11:45:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:24.925751 | orchestrator | 2025-09-19 11:45:24 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:24.927492 | orchestrator | 2025-09-19 11:45:24 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:24.927903 | orchestrator | 2025-09-19 11:45:24 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:24.928425 | orchestrator | 2025-09-19 11:45:24 | INFO  | Task 
a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:24.928975 | orchestrator | 2025-09-19 11:45:24 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:24.929003 | orchestrator | 2025-09-19 11:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:27.953519 | orchestrator | 2025-09-19 11:45:27 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:27.953600 | orchestrator | 2025-09-19 11:45:27 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:27.953622 | orchestrator | 2025-09-19 11:45:27 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:27.954127 | orchestrator | 2025-09-19 11:45:27 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:27.954539 | orchestrator | 2025-09-19 11:45:27 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:27.954561 | orchestrator | 2025-09-19 11:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:31.155960 | orchestrator | 2025-09-19 11:45:30 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:31.156059 | orchestrator | 2025-09-19 11:45:30 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:31.156124 | orchestrator | 2025-09-19 11:45:30 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:31.156136 | orchestrator | 2025-09-19 11:45:30 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:31.156146 | orchestrator | 2025-09-19 11:45:30 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:31.156157 | orchestrator | 2025-09-19 11:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:34.002922 | orchestrator | 2025-09-19 11:45:34 | INFO  | Task 
d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:34.003129 | orchestrator | 2025-09-19 11:45:34 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state STARTED 2025-09-19 11:45:34.003588 | orchestrator | 2025-09-19 11:45:34 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:34.004044 | orchestrator | 2025-09-19 11:45:34 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:34.004582 | orchestrator | 2025-09-19 11:45:34 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:34.004609 | orchestrator | 2025-09-19 11:45:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:37.035992 | orchestrator | 2025-09-19 11:45:37 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:37.036322 | orchestrator | 2025-09-19 11:45:37 | INFO  | Task c806e6aa-1aae-44f8-87d6-c452bca373a2 is in state SUCCESS 2025-09-19 11:45:37.036511 | orchestrator | 2025-09-19 11:45:37.036533 | orchestrator | 2025-09-19 11:45:37.036546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:45:37.036566 | orchestrator | 2025-09-19 11:45:37.036583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:45:37.036599 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:00.359) 0:00:00.359 ****** 2025-09-19 11:45:37.036731 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:45:37.036753 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:45:37.036772 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:45:37.036787 | orchestrator | ok: [testbed-manager] 2025-09-19 11:45:37.036798 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:45:37.036809 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:45:37.036819 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:45:37.036830 | orchestrator | 2025-09-19 
11:45:37.036841 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:45:37.036853 | orchestrator | Friday 19 September 2025 11:44:09 +0000 (0:00:00.950) 0:00:01.309 ****** 2025-09-19 11:45:37.036863 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036875 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036886 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036897 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036908 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036948 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036969 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-19 11:45:37.036988 | orchestrator | 2025-09-19 11:45:37.037007 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 11:45:37.037026 | orchestrator | 2025-09-19 11:45:37.037046 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-19 11:45:37.037100 | orchestrator | Friday 19 September 2025 11:44:10 +0000 (0:00:01.020) 0:00:02.329 ****** 2025-09-19 11:45:37.037113 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:45:37.037125 | orchestrator | 2025-09-19 11:45:37.037140 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-19 11:45:37.037159 | orchestrator | Friday 19 September 2025 11:44:12 +0000 (0:00:02.104) 0:00:04.434 ****** 2025-09-19 11:45:37.037178 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-19 11:45:37.037198 | orchestrator 
| 2025-09-19 11:45:37.037216 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-19 11:45:37.037227 | orchestrator | Friday 19 September 2025 11:44:16 +0000 (0:00:04.204) 0:00:08.639 ****** 2025-09-19 11:45:37.037238 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-19 11:45:37.037250 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-19 11:45:37.037261 | orchestrator | 2025-09-19 11:45:37.037272 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-19 11:45:37.037282 | orchestrator | Friday 19 September 2025 11:44:24 +0000 (0:00:07.333) 0:00:15.973 ****** 2025-09-19 11:45:37.037293 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:45:37.037304 | orchestrator | 2025-09-19 11:45:37.037314 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-19 11:45:37.037326 | orchestrator | Friday 19 September 2025 11:44:28 +0000 (0:00:03.789) 0:00:19.762 ****** 2025-09-19 11:45:37.037338 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:45:37.037351 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-19 11:45:37.037363 | orchestrator | 2025-09-19 11:45:37.037375 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-19 11:45:37.037389 | orchestrator | Friday 19 September 2025 11:44:32 +0000 (0:00:04.063) 0:00:23.828 ****** 2025-09-19 11:45:37.037401 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:45:37.037414 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-19 11:45:37.037426 | orchestrator | 2025-09-19 11:45:37.037439 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-19 11:45:37.037451 | orchestrator | Friday 19 September 2025 11:44:37 +0000 (0:00:05.550) 0:00:29.379 ****** 2025-09-19 11:45:37.037464 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-19 11:45:37.037476 | orchestrator | 2025-09-19 11:45:37.037488 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:45:37.037501 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037515 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037528 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037553 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037575 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037603 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037616 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.037628 | orchestrator | 2025-09-19 11:45:37.037641 | orchestrator | 2025-09-19 11:45:37.037653 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:45:37.037666 | orchestrator | Friday 19 September 2025 11:44:42 +0000 (0:00:05.211) 0:00:34.590 ****** 2025-09-19 11:45:37.037679 | orchestrator | =============================================================================== 2025-09-19 11:45:37.037691 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.33s 2025-09-19 11:45:37.037704 | 
orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.55s 2025-09-19 11:45:37.037717 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.21s 2025-09-19 11:45:37.037729 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.20s 2025-09-19 11:45:37.037740 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.07s 2025-09-19 11:45:37.037750 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.79s 2025-09-19 11:45:37.037761 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.11s 2025-09-19 11:45:37.037772 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-09-19 11:45:37.037782 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2025-09-19 11:45:37.037793 | orchestrator | 2025-09-19 11:45:37.037804 | orchestrator | 2025-09-19 11:45:37.037815 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-09-19 11:45:37.037825 | orchestrator | 2025-09-19 11:45:37.037836 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-19 11:45:37.037847 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:00.266) 0:00:00.266 ****** 2025-09-19 11:45:37.037857 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.037868 | orchestrator | 2025-09-19 11:45:37.037879 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 11:45:37.037889 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:01.507) 0:00:01.773 ****** 2025-09-19 11:45:37.037900 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.037910 | orchestrator | 2025-09-19 11:45:37.037921 | orchestrator | TASK [Set
mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 11:45:37.037932 | orchestrator | Friday 19 September 2025 11:44:04 +0000 (0:00:00.908) 0:00:02.682 ****** 2025-09-19 11:45:37.037942 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.037953 | orchestrator | 2025-09-19 11:45:37.037964 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 11:45:37.037975 | orchestrator | Friday 19 September 2025 11:44:05 +0000 (0:00:01.058) 0:00:03.740 ****** 2025-09-19 11:45:37.037986 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.037996 | orchestrator | 2025-09-19 11:45:37.038007 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 11:45:37.038096 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:01.673) 0:00:05.414 ****** 2025-09-19 11:45:37.038117 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.038138 | orchestrator | 2025-09-19 11:45:37.038150 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-19 11:45:37.038161 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:00.883) 0:00:06.297 ****** 2025-09-19 11:45:37.038171 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.038182 | orchestrator | 2025-09-19 11:45:37.038193 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 11:45:37.038204 | orchestrator | Friday 19 September 2025 11:44:09 +0000 (0:00:01.201) 0:00:07.499 ****** 2025-09-19 11:45:37.038224 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.038235 | orchestrator | 2025-09-19 11:45:37.038246 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 11:45:37.038257 | orchestrator | Friday 19 September 2025 11:44:11 +0000 (0:00:02.020) 0:00:09.519 ****** 2025-09-19 11:45:37.038267 
| orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.038278 | orchestrator | 2025-09-19 11:45:37.038289 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 11:45:37.038300 | orchestrator | Friday 19 September 2025 11:44:12 +0000 (0:00:01.084) 0:00:10.604 ****** 2025-09-19 11:45:37.038311 | orchestrator | changed: [testbed-manager] 2025-09-19 11:45:37.038321 | orchestrator | 2025-09-19 11:45:37.038332 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 11:45:37.038343 | orchestrator | Friday 19 September 2025 11:45:10 +0000 (0:00:58.090) 0:01:08.694 ****** 2025-09-19 11:45:37.038354 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:45:37.038463 | orchestrator | 2025-09-19 11:45:37.038480 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 11:45:37.038490 | orchestrator | 2025-09-19 11:45:37.038501 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 11:45:37.038512 | orchestrator | Friday 19 September 2025 11:45:10 +0000 (0:00:00.181) 0:01:08.876 ****** 2025-09-19 11:45:37.038523 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:45:37.038533 | orchestrator | 2025-09-19 11:45:37.038544 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 11:45:37.038555 | orchestrator | 2025-09-19 11:45:37.038566 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 11:45:37.038576 | orchestrator | Friday 19 September 2025 11:45:22 +0000 (0:00:11.688) 0:01:20.564 ****** 2025-09-19 11:45:37.038594 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:45:37.038605 | orchestrator | 2025-09-19 11:45:37.038616 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 
11:45:37.038627 | orchestrator | 2025-09-19 11:45:37.038637 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 11:45:37.038648 | orchestrator | Friday 19 September 2025 11:45:23 +0000 (0:00:01.199) 0:01:21.763 ****** 2025-09-19 11:45:37.038659 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:45:37.038670 | orchestrator | 2025-09-19 11:45:37.038690 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:45:37.038702 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:45:37.038713 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.038724 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.038735 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:45:37.038746 | orchestrator | 2025-09-19 11:45:37.038756 | orchestrator | 2025-09-19 11:45:37.038767 | orchestrator | 2025-09-19 11:45:37.038778 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:45:37.038788 | orchestrator | Friday 19 September 2025 11:45:34 +0000 (0:00:11.201) 0:01:32.965 ****** 2025-09-19 11:45:37.038799 | orchestrator | =============================================================================== 2025-09-19 11:45:37.038810 | orchestrator | Create admin user ------------------------------------------------------ 58.09s 2025-09-19 11:45:37.038820 | orchestrator | Restart ceph manager service ------------------------------------------- 24.09s 2025-09-19 11:45:37.038831 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.02s 2025-09-19 11:45:37.038850 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 
-------------------------------- 1.67s 2025-09-19 11:45:37.038861 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.51s 2025-09-19 11:45:37.038871 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.20s 2025-09-19 11:45:37.038882 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2025-09-19 11:45:37.039007 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.06s 2025-09-19 11:45:37.039019 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-09-19 11:45:37.039030 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.88s 2025-09-19 11:45:37.039041 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-09-19 11:45:37.039102 | orchestrator | 2025-09-19 11:45:37 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:37.039118 | orchestrator | 2025-09-19 11:45:37 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:37.039135 | orchestrator | 2025-09-19 11:45:37 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:37.039146 | orchestrator | 2025-09-19 11:45:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:40.081401 | orchestrator | 2025-09-19 11:45:40 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:45:40.081467 | orchestrator | 2025-09-19 11:45:40 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:45:40.081798 | orchestrator | 2025-09-19 11:45:40 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:45:40.082567 | orchestrator | 2025-09-19 11:45:40 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:45:40.082595 | 
orchestrator | 2025-09-19 11:45:40 | INFO  | Wait 1 second(s) until the next check [...] 2025-09-19 11:46:47.008881 | orchestrator | 2025-09-19 11:46:47 | INFO  | Task
d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:46:47.011000 | orchestrator | 2025-09-19 11:46:47 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:46:47.011054 | orchestrator | 2025-09-19 11:46:47 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:46:47.011377 | orchestrator | 2025-09-19 11:46:47 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:46:47.011398 | orchestrator | 2025-09-19 11:46:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:46:50.060029 | orchestrator | 2025-09-19 11:46:50 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:46:50.062815 | orchestrator | 2025-09-19 11:46:50 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:46:50.065122 | orchestrator | 2025-09-19 11:46:50 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state STARTED 2025-09-19 11:46:50.067364 | orchestrator | 2025-09-19 11:46:50 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:46:50.067415 | orchestrator | 2025-09-19 11:46:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:46:53.109107 | orchestrator | 2025-09-19 11:46:53 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:46:53.110937 | orchestrator | 2025-09-19 11:46:53 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:46:53.114132 | orchestrator | 2025-09-19 11:46:53 | INFO  | Task a09f90c8-5b5d-4a3a-9d0f-cfdc11ea4926 is in state SUCCESS 2025-09-19 11:46:53.116000 | orchestrator | 2025-09-19 11:46:53.116039 | orchestrator | 2025-09-19 11:46:53.116051 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:46:53.116063 | orchestrator | 2025-09-19 11:46:53.116075 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-19 11:46:53.116086 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:00.396) 0:00:00.396 ****** 2025-09-19 11:46:53.116097 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:46:53.116110 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:46:53.116121 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:46:53.116132 | orchestrator | 2025-09-19 11:46:53.116143 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:46:53.116154 | orchestrator | Friday 19 September 2025 11:44:09 +0000 (0:00:00.371) 0:00:00.767 ****** 2025-09-19 11:46:53.116165 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-19 11:46:53.116176 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-19 11:46:53.116187 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-19 11:46:53.116198 | orchestrator | 2025-09-19 11:46:53.116209 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-19 11:46:53.116220 | orchestrator | 2025-09-19 11:46:53.116277 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 11:46:53.116290 | orchestrator | Friday 19 September 2025 11:44:09 +0000 (0:00:00.598) 0:00:01.366 ****** 2025-09-19 11:46:53.116301 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:53.116313 | orchestrator | 2025-09-19 11:46:53.116324 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-19 11:46:53.116334 | orchestrator | Friday 19 September 2025 11:44:10 +0000 (0:00:00.893) 0:00:02.260 ****** 2025-09-19 11:46:53.116345 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-19 11:46:53.116356 | orchestrator | 2025-09-19 11:46:53.116367 | orchestrator | TASK 
[service-ks-register : glance | Creating endpoints] *********************** 2025-09-19 11:46:53.116378 | orchestrator | Friday 19 September 2025 11:44:15 +0000 (0:00:04.802) 0:00:07.063 ****** 2025-09-19 11:46:53.116389 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-19 11:46:53.116400 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-19 11:46:53.116411 | orchestrator | 2025-09-19 11:46:53.116422 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-19 11:46:53.116432 | orchestrator | Friday 19 September 2025 11:44:22 +0000 (0:00:07.091) 0:00:14.155 ****** 2025-09-19 11:46:53.116443 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-19 11:46:53.116454 | orchestrator | 2025-09-19 11:46:53.116465 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-19 11:46:53.116476 | orchestrator | Friday 19 September 2025 11:44:27 +0000 (0:00:04.605) 0:00:18.761 ****** 2025-09-19 11:46:53.116487 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:46:53.116498 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-19 11:46:53.116509 | orchestrator | 2025-09-19 11:46:53.116520 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-19 11:46:53.116530 | orchestrator | Friday 19 September 2025 11:44:31 +0000 (0:00:04.047) 0:00:22.809 ****** 2025-09-19 11:46:53.116542 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:46:53.116556 | orchestrator | 2025-09-19 11:46:53.116568 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-19 11:46:53.116580 | orchestrator | Friday 19 September 2025 11:44:34 +0000 (0:00:02.964) 0:00:25.773 ****** 2025-09-19 
11:46:53.116592 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-19 11:46:53.116604 | orchestrator | 2025-09-19 11:46:53.116616 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-19 11:46:53.116628 | orchestrator | Friday 19 September 2025 11:44:38 +0000 (0:00:04.145) 0:00:29.919 ****** 2025-09-19 11:46:53.116678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.116708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.116724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.116738 | orchestrator | 2025-09-19 11:46:53.116750 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 11:46:53.116770 | orchestrator | Friday 19 September 2025 11:44:41 +0000 (0:00:03.244) 0:00:33.164 ****** 2025-09-19 11:46:53.116783 | orchestrator | included: 
/ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:53.116796 | orchestrator | 2025-09-19 11:46:53.116941 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-19 11:46:53.116971 | orchestrator | Friday 19 September 2025 11:44:42 +0000 (0:00:00.729) 0:00:33.893 ****** 2025-09-19 11:46:53.116984 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:46:53.116995 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.117006 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:46:53.117017 | orchestrator | 2025-09-19 11:46:53.117028 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-19 11:46:53.117039 | orchestrator | Friday 19 September 2025 11:44:46 +0000 (0:00:03.793) 0:00:37.687 ****** 2025-09-19 11:46:53.117049 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117082 | orchestrator | 2025-09-19 11:46:53.117092 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-19 11:46:53.117103 | orchestrator | Friday 19 September 2025 11:44:48 +0000 (0:00:01.963) 0:00:39.650 ****** 2025-09-19 11:46:53.117114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117124 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 
'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:46:53.117145 | orchestrator | 2025-09-19 11:46:53.117156 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-19 11:46:53.117167 | orchestrator | Friday 19 September 2025 11:44:49 +0000 (0:00:01.269) 0:00:40.920 ****** 2025-09-19 11:46:53.117178 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:46:53.117188 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:46:53.117199 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:46:53.117210 | orchestrator | 2025-09-19 11:46:53.117220 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-19 11:46:53.117231 | orchestrator | Friday 19 September 2025 11:44:50 +0000 (0:00:00.829) 0:00:41.749 ****** 2025-09-19 11:46:53.117242 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.117252 | orchestrator | 2025-09-19 11:46:53.117263 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-19 11:46:53.117274 | orchestrator | Friday 19 September 2025 11:44:50 +0000 (0:00:00.312) 0:00:42.062 ****** 2025-09-19 11:46:53.117284 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.117295 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.117306 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.117317 | orchestrator | 2025-09-19 11:46:53.117328 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 11:46:53.117338 | orchestrator | Friday 19 September 2025 11:44:50 +0000 (0:00:00.306) 0:00:42.369 ****** 2025-09-19 11:46:53.117349 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:53.117359 | orchestrator | 2025-09-19 11:46:53.117370 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 
2025-09-19 11:46:53.117381 | orchestrator | Friday 19 September 2025 11:44:51 +0000 (0:00:00.543) 0:00:42.912 ****** 2025-09-19 11:46:53.117408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117462 | orchestrator | 2025-09-19 11:46:53.117473 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-19 11:46:53.117484 | orchestrator | Friday 19 September 2025 11:44:57 +0000 (0:00:05.684) 0:00:48.596 ****** 2025-09-19 11:46:53.117509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117570 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.117581 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.117607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117620 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.117631 | orchestrator | 2025-09-19 11:46:53.117642 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-19 11:46:53.117653 | orchestrator | Friday 19 September 2025 11:45:00 +0000 (0:00:03.015) 0:00:51.612 ****** 2025-09-19 11:46:53.117665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117697 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.117722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117734 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.117746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:46:53.117765 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.117776 | orchestrator | 2025-09-19 11:46:53.117787 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-19 11:46:53.117798 | orchestrator | Friday 19 September 2025 11:45:03 +0000 (0:00:03.205) 0:00:54.817 ****** 2025-09-19 11:46:53.117808 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.117819 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.117830 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.117841 | orchestrator | 2025-09-19 11:46:53.117852 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-19 11:46:53.117863 | orchestrator | Friday 19 September 2025 11:45:07 +0000 (0:00:03.903) 0:00:58.721 ****** 2025-09-19 11:46:53.117884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.117952 | orchestrator | 2025-09-19 11:46:53.117963 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-19 11:46:53.117974 | orchestrator | Friday 19 September 2025 11:45:12 +0000 (0:00:04.829) 0:01:03.550 ****** 2025-09-19 11:46:53.117985 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.117996 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:46:53.118011 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:46:53.118093 | orchestrator | 2025-09-19 11:46:53.118105 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-19 11:46:53.118116 | orchestrator | Friday 19 September 2025 11:45:19 +0000 (0:00:07.491) 0:01:11.041 ****** 2025-09-19 11:46:53.118127 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118138 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118149 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118160 | orchestrator | 2025-09-19 11:46:53.118171 | orchestrator | TASK 
[glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-19 11:46:53.118189 | orchestrator | Friday 19 September 2025 11:45:26 +0000 (0:00:06.650) 0:01:17.692 ****** 2025-09-19 11:46:53.118201 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118212 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118222 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118233 | orchestrator | 2025-09-19 11:46:53.118244 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-19 11:46:53.118255 | orchestrator | Friday 19 September 2025 11:45:30 +0000 (0:00:03.787) 0:01:21.480 ****** 2025-09-19 11:46:53.118266 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118277 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118288 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118298 | orchestrator | 2025-09-19 11:46:53.118309 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-19 11:46:53.118320 | orchestrator | Friday 19 September 2025 11:45:34 +0000 (0:00:04.426) 0:01:25.907 ****** 2025-09-19 11:46:53.118331 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118342 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118353 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118371 | orchestrator | 2025-09-19 11:46:53.118382 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-19 11:46:53.118393 | orchestrator | Friday 19 September 2025 11:45:38 +0000 (0:00:04.246) 0:01:30.153 ****** 2025-09-19 11:46:53.118404 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118425 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118435 | orchestrator | 2025-09-19 11:46:53.118446 | orchestrator | TASK 
[glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-19 11:46:53.118457 | orchestrator | Friday 19 September 2025 11:45:38 +0000 (0:00:00.296) 0:01:30.449 ****** 2025-09-19 11:46:53.118468 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:46:53.118479 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118490 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:46:53.118501 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118511 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:46:53.118522 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118533 | orchestrator | 2025-09-19 11:46:53.118544 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-19 11:46:53.118554 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:02.919) 0:01:33.369 ****** 2025-09-19 11:46:53.118566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.118597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.118617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:46:53.118630 | orchestrator | 2025-09-19 11:46:53.118641 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 11:46:53.118652 | orchestrator | Friday 19 September 2025 11:45:45 +0000 (0:00:03.775) 0:01:37.144 ****** 2025-09-19 11:46:53.118662 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:53.118673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:53.118684 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:53.118694 | orchestrator | 2025-09-19 11:46:53.118705 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-19 11:46:53.118715 | orchestrator | Friday 19 September 2025 11:45:45 +0000 (0:00:00.279) 0:01:37.424 ****** 2025-09-19 11:46:53.118726 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.118737 | orchestrator | 2025-09-19 11:46:53.118747 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-19 11:46:53.118758 | orchestrator | Friday 19 September 2025 11:45:47 +0000 (0:00:01.987) 0:01:39.412 ****** 2025-09-19 11:46:53.118769 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.118779 | orchestrator | 2025-09-19 11:46:53.118790 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-19 11:46:53.118801 | orchestrator | Friday 19 September 2025 11:45:49 +0000 (0:00:01.911) 0:01:41.323 ****** 2025-09-19 11:46:53.118812 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.118830 | orchestrator | 2025-09-19 11:46:53.118846 | 
orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-19 11:46:53.118857 | orchestrator | Friday 19 September 2025 11:45:51 +0000 (0:00:01.809) 0:01:43.132 ****** 2025-09-19 11:46:53.118868 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.118878 | orchestrator | 2025-09-19 11:46:53.118889 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-19 11:46:53.118953 | orchestrator | Friday 19 September 2025 11:46:16 +0000 (0:00:24.467) 0:02:07.600 ****** 2025-09-19 11:46:53.118965 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.118976 | orchestrator | 2025-09-19 11:46:53.118994 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 11:46:53.119005 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:02.016) 0:02:09.616 ****** 2025-09-19 11:46:53.119016 | orchestrator | 2025-09-19 11:46:53.119027 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 11:46:53.119038 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.062) 0:02:09.679 ****** 2025-09-19 11:46:53.119048 | orchestrator | 2025-09-19 11:46:53.119059 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 11:46:53.119070 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.066) 0:02:09.745 ****** 2025-09-19 11:46:53.119081 | orchestrator | 2025-09-19 11:46:53.119092 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-19 11:46:53.119102 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.067) 0:02:09.812 ****** 2025-09-19 11:46:53.119113 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:53.119124 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:46:53.119135 | orchestrator | changed: [testbed-node-2] 
2025-09-19 11:46:53.119146 | orchestrator | 2025-09-19 11:46:53.119156 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:46:53.119168 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 11:46:53.119181 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:46:53.119192 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:46:53.119202 | orchestrator | 2025-09-19 11:46:53.119213 | orchestrator | 2025-09-19 11:46:53.119224 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:46:53.119235 | orchestrator | Friday 19 September 2025 11:46:51 +0000 (0:00:32.644) 0:02:42.457 ****** 2025-09-19 11:46:53.119246 | orchestrator | =============================================================================== 2025-09-19 11:46:53.119257 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.64s 2025-09-19 11:46:53.119267 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.47s 2025-09-19 11:46:53.119278 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.49s 2025-09-19 11:46:53.119288 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.09s 2025-09-19 11:46:53.119299 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.65s 2025-09-19 11:46:53.119310 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.68s 2025-09-19 11:46:53.119321 | orchestrator | glance : Copying over config.json files for services -------------------- 4.83s 2025-09-19 11:46:53.119331 | orchestrator | service-ks-register : glance | Creating services 
------------------------ 4.80s 2025-09-19 11:46:53.119342 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.61s 2025-09-19 11:46:53.119353 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.43s 2025-09-19 11:46:53.119364 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.25s 2025-09-19 11:46:53.119381 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.15s 2025-09-19 11:46:53.119392 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.05s 2025-09-19 11:46:53.119403 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.90s 2025-09-19 11:46:53.119413 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.79s 2025-09-19 11:46:53.119424 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.79s 2025-09-19 11:46:53.119435 | orchestrator | glance : Check glance containers ---------------------------------------- 3.78s 2025-09-19 11:46:53.119445 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.24s 2025-09-19 11:46:53.119456 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.21s 2025-09-19 11:46:53.119466 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.02s 2025-09-19 11:46:53.119477 | orchestrator | 2025-09-19 11:46:53 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:46:53.119487 | orchestrator | 2025-09-19 11:46:53 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:46:53.119497 | orchestrator | 2025-09-19 11:46:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:46:56.161032 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task 
d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:47:14.488761 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:14.488803 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:14.490517 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:14.490543 | orchestrator | 2025-09-19 11:47:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:17.538442 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state STARTED 2025-09-19 11:47:17.540215 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:17.543113 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:17.545046 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:17.545081 | orchestrator | 2025-09-19 11:47:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:20.594650 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task d2e17c8d-b877-47bc-be1d-ce845e71ed0b is in state SUCCESS 2025-09-19 11:47:20.596557 | orchestrator | 2025-09-19 11:47:20.596599 | orchestrator | 2025-09-19 11:47:20.596668 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:47:20.596680 | orchestrator | 2025-09-19 11:47:20.596690 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:47:20.596701 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:00.252) 0:00:00.252 ****** 2025-09-19 11:47:20.596711 | orchestrator | ok: [testbed-manager] 2025-09-19 11:47:20.596723 | orchestrator | ok: [testbed-node-0] 
2025-09-19 11:47:20.596733 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:47:20.596768 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:47:20.596778 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:47:20.596788 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:47:20.596797 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:47:20.596829 | orchestrator | 2025-09-19 11:47:20.596916 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:47:20.596930 | orchestrator | Friday 19 September 2025 11:44:02 +0000 (0:00:00.759) 0:00:01.011 ****** 2025-09-19 11:47:20.596940 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-19 11:47:20.596950 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-19 11:47:20.596960 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-19 11:47:20.596999 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-19 11:47:20.597011 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-19 11:47:20.597021 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-19 11:47:20.597031 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-19 11:47:20.597040 | orchestrator | 2025-09-19 11:47:20.597050 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-19 11:47:20.597060 | orchestrator | 2025-09-19 11:47:20.597070 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-19 11:47:20.597079 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.687) 0:00:01.699 ****** 2025-09-19 11:47:20.597091 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:47:20.597102 | 
orchestrator | 2025-09-19 11:47:20.597112 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-19 11:47:20.597122 | orchestrator | Friday 19 September 2025 11:44:04 +0000 (0:00:01.627) 0:00:03.326 ****** 2025-09-19 11:47:20.597134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597150 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:47:20.597162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597264 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597275 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597320 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597439 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:47:20.597454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-09-19 11:47:20.597596 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.597684 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.597704 | orchestrator | 2025-09-19 11:47:20.597714 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-19 11:47:20.597724 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:03.456) 0:00:06.782 ****** 2025-09-19 11:47:20.597735 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:47:20.597744 | orchestrator | 2025-09-19 11:47:20.597754 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-19 11:47:20.597764 | orchestrator | Friday 19 September 2025 11:44:10 +0000 (0:00:01.758) 0:00:08.541 ****** 2025-09-19 11:47:20.597774 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:47:20.597784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.597979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.598140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.598159 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.598170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598180 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:47:20.598289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598307 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.598387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.598439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.599218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-19 11:47:20.599313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.599329 | orchestrator | 2025-09-19 11:47:20.599342 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 11:47:20.599355 | orchestrator | Friday 19 September 2025 11:44:16 +0000 (0:00:06.439) 0:00:14.981 ****** 2025-09-19 11:47:20.599368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 11:47:20.599381 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599446 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 11:47:20.599479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599529 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599560 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.599573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599648 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599701 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.599719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.599731 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.599753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599781 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599801 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.599814 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.599827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599894 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.599907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.599926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.599962 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.599975 | orchestrator | 2025-09-19 11:47:20.599988 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 11:47:20.600001 | orchestrator | Friday 19 September 2025 11:44:17 +0000 (0:00:01.343) 0:00:16.324 ****** 2025-09-19 11:47:20.600015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 11:47:20.600041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600123 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600136 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 11:47:20.600169 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600180 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.600193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600275 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.600287 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.600298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:47:20.600357 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.600380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600431 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.600443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:47:20.600455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:47:20.600479 | orchestrator | skipping: 
[testbed-node-4]
2025-09-19 11:47:20.600491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:47:20.600507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:47:20.600527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:47:20.600547 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:47:20.600559 | orchestrator |
2025-09-19 11:47:20.600571 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-19 11:47:20.600583 | orchestrator | Friday 19 September 2025 11:44:19
+0000 (0:00:01.721) 0:00:18.045 ****** 2025-09-19 11:47:20.600595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:47:20.600607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600631 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600701 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.600713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-09-19 11:47:20.600807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:47:20.600908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 
11:47:20.600971 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.600982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.600994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.601006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:47:20.601029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:47:20.601041 | orchestrator |
2025-09-19 11:47:20.601054 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-19 11:47:20.601065 | orchestrator | Friday 19 September 2025 11:44:25 +0000 (0:00:06.084) 0:00:24.130 ******
2025-09-19 11:47:20.601077 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:47:20.601088 | orchestrator |
2025-09-19 11:47:20.601100 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-19 11:47:20.601116 | orchestrator | Friday 19 September 2025 11:44:26 +0000 (0:00:00.993) 0:00:25.123 ******
2025-09-19 11:47:20.601128 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
False, 'isgid': False})  2025-09-19 11:47:20.601140 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601153 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601166 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601178 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601197 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601219 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601231 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601243 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601255 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:47:20.601266 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 
'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601377 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601401 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601441 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601453 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1071429, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7968326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601464 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.601476 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601498 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601510 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601557 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601569 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601581 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601598 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601610 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.601626 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602187 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1071472, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8012512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602219 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602230 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602242 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602266 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602278 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602299 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602320 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602333 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602344 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602356 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602374 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602386 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602402 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602420 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602432 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602443 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1071418, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7949693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602461 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602471 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602482 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602497 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602513 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602524 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602534 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602550 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602560 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602570 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602585 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602600 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602611 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602621 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602638 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1071450, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7996173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602649 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602660 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602691 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602702 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602719 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602729 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602739 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602767 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602785 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602797 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602814 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602826 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602856 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602869 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602885 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602903 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602915 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:47:20.602927 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602946 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1071412, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7928092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602957 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602968 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.602980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603008 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603027 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603047 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603057 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603067 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603077 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603088 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603102 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603119 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 
'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603136 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603147 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:47:20.603157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603168 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603178 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603203 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:47:20.603219 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603235 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603246 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603256 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603266 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603277 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603292 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1071430, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7971685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603313 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603324 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603334 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:47:20.603344 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603354 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603364 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603374 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603385 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:47:20.603399 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603421 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603432 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:47:20.603442 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603452 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:47:20.603462 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1071447, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7988727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603472 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1071433, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7973902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603482 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1071426, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7963526, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603493 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071468, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8008153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071406, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7920365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603534 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1071493, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8028712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603545 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1071465, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8003454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603555 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1071415, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7931485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603565 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1071410, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.792336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603575 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1071446, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603586 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1071440, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.79824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603607 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1071487, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.8021455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:47:20.603617 | orchestrator |
2025-09-19 11:47:20.603628 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-19 11:47:20.603638 | orchestrator | Friday 19 September 2025 11:44:52 +0000 (0:00:25.838) 0:00:50.962 ******
2025-09-19 11:47:20.603648 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:47:20.603659 | orchestrator |
2025-09-19 11:47:20.603674 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-19 11:47:20.603684
| orchestrator | Friday 19 September 2025 11:44:53 +0000 (0:00:00.868) 0:00:51.831 ******
2025-09-19 11:47:20.603695 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603705 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603715 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.603725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603735 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.603745 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 11:47:20.603755 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603775 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.603785 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603794 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.603805 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:47:20.603814 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603834 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.603863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603873 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.603883 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603902 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.603912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603922 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.603932 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603951 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.603960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603970 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.603980 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.603989 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.603999 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.604017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.604026 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.604036 | orchestrator | [WARNING]: Skipped
2025-09-19 11:47:20.604046 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.604056 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-09-19 11:47:20.604066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-09-19 11:47:20.604076 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-09-19 11:47:20.604085 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:47:20.604095 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 11:47:20.604105 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 11:47:20.604115 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 11:47:20.604124 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 11:47:20.604134 | orchestrator |
2025-09-19 11:47:20.604144 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-19 11:47:20.604154 | orchestrator | Friday 19 September 2025 11:44:55 +0000 (0:00:02.580) 0:00:54.411 ******
2025-09-19 11:47:20.604163 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604173 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:47:20.604183 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604192 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:47:20.604202 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604212 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:47:20.604221 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604231 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:47:20.604241 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604251 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:47:20.604261 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604275 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:47:20.604285 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:47:20.604295 | orchestrator |
2025-09-19 11:47:20.604304 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-19 11:47:20.604315 | orchestrator | Friday 19 September 2025 11:45:14 +0000 (0:00:18.201) 0:01:12.613 ******
2025-09-19 11:47:20.604325 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604341 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:47:20.604352 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604361 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:47:20.604371 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604380 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:47:20.604390 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604400 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:47:20.604410 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604419 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:47:20.604429 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604439 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:47:20.604449 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:47:20.604465 | orchestrator |
2025-09-19 11:47:20.604475 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-19 11:47:20.604485 | orchestrator | Friday 19 September 2025 11:45:18 +0000 (0:00:04.155) 0:01:16.769 ******
2025-09-19 11:47:20.604495 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:47:20.604505 | orchestrator | skipping: [testbed-node-0] =>
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 11:47:20.604515 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 11:47:20.604525 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.604535 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.604545 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.604554 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 11:47:20.604564 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.604574 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-19 11:47:20.604583 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 11:47:20.604593 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.604603 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 11:47:20.604613 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.604623 | orchestrator | 2025-09-19 11:47:20.604633 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-19 11:47:20.604642 | orchestrator | Friday 19 September 2025 11:45:21 +0000 (0:00:02.871) 0:01:19.640 ****** 2025-09-19 11:47:20.604652 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:47:20.604661 | orchestrator | 2025-09-19 11:47:20.604671 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-19 11:47:20.604681 | orchestrator | Friday 19 September 
2025 11:45:22 +0000 (0:00:01.615) 0:01:21.256 ****** 2025-09-19 11:47:20.604690 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.604700 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.604710 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.604719 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.604729 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.604739 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.604748 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.604758 | orchestrator | 2025-09-19 11:47:20.604768 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-19 11:47:20.604777 | orchestrator | Friday 19 September 2025 11:45:23 +0000 (0:00:00.798) 0:01:22.054 ****** 2025-09-19 11:47:20.604787 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.604797 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.604806 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.604816 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.604825 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.604835 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.604866 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.604876 | orchestrator | 2025-09-19 11:47:20.604886 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-19 11:47:20.604896 | orchestrator | Friday 19 September 2025 11:45:27 +0000 (0:00:03.723) 0:01:25.778 ****** 2025-09-19 11:47:20.604905 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.604923 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.604938 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.604948 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 11:47:20.604957 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.604967 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.604977 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.604987 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.605048 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.605066 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.605083 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.605097 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.605111 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 11:47:20.605125 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.605140 | orchestrator | 2025-09-19 11:47:20.605154 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-19 11:47:20.605168 | orchestrator | Friday 19 September 2025 11:45:29 +0000 (0:00:02.527) 0:01:28.305 ****** 2025-09-19 11:47:20.605182 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605197 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.605210 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605224 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605238 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.605252 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:47:20.605267 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605281 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.605298 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-19 11:47:20.605314 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605330 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.605346 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 11:47:20.605362 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.605378 | orchestrator | 2025-09-19 11:47:20.605388 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-19 11:47:20.605398 | orchestrator | Friday 19 September 2025 11:45:31 +0000 (0:00:01.945) 0:01:30.251 ****** 2025-09-19 11:47:20.605407 | orchestrator | [WARNING]: Skipped 2025-09-19 11:47:20.605418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-19 11:47:20.605427 | orchestrator | due to this access issue: 2025-09-19 11:47:20.605437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-19 11:47:20.605447 | orchestrator | not a directory 2025-09-19 11:47:20.605456 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:47:20.605466 | orchestrator | 2025-09-19 11:47:20.605476 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-19 11:47:20.605485 | orchestrator | Friday 19 September 2025 11:45:32 +0000 (0:00:01.146) 0:01:31.397 ****** 2025-09-19 11:47:20.605494 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 11:47:20.605505 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.605514 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.605540 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.605557 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.605572 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.605588 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.605605 | orchestrator | 2025-09-19 11:47:20.605622 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-19 11:47:20.605637 | orchestrator | Friday 19 September 2025 11:45:33 +0000 (0:00:01.016) 0:01:32.414 ****** 2025-09-19 11:47:20.605652 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.605669 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:20.605685 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:20.605701 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:20.605715 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:20.605731 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:20.605749 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:20.605767 | orchestrator | 2025-09-19 11:47:20.605783 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-19 11:47:20.605796 | orchestrator | Friday 19 September 2025 11:45:34 +0000 (0:00:00.738) 0:01:33.152 ****** 2025-09-19 11:47:20.605816 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:47:20.605874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605908 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605937 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605947 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:47:20.605962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.605978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.605990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606010 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606107 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:47:20.606121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606179 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2025-09-19 11:47:20.606226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:47:20.606236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:47:20.606274 | orchestrator | 2025-09-19 11:47:20.606284 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-19 11:47:20.606293 | orchestrator | Friday 19 September 2025 11:45:40 +0000 (0:00:05.448) 0:01:38.601 ****** 2025-09-19 11:47:20.606304 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 11:47:20.606313 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:47:20.606323 | orchestrator | 2025-09-19 11:47:20.606333 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606343 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:01.065) 0:01:39.666 ****** 2025-09-19 11:47:20.606352 | orchestrator | 2025-09-19 11:47:20.606362 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606372 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.084) 0:01:39.751 ****** 2025-09-19 11:47:20.606381 | orchestrator | 2025-09-19 11:47:20.606391 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606400 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.077) 0:01:39.828 ****** 2025-09-19 11:47:20.606410 | orchestrator | 2025-09-19 11:47:20.606420 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606430 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.065) 0:01:39.894 ****** 2025-09-19 11:47:20.606439 | orchestrator | 2025-09-19 11:47:20.606449 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2025-09-19 11:47:20.606458 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.182) 0:01:40.077 ****** 2025-09-19 11:47:20.606468 | orchestrator | 2025-09-19 11:47:20.606478 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606487 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.063) 0:01:40.140 ****** 2025-09-19 11:47:20.606497 | orchestrator | 2025-09-19 11:47:20.606507 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-19 11:47:20.606517 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.062) 0:01:40.202 ****** 2025-09-19 11:47:20.606526 | orchestrator | 2025-09-19 11:47:20.606536 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-19 11:47:20.606546 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.081) 0:01:40.283 ****** 2025-09-19 11:47:20.606555 | orchestrator | changed: [testbed-manager] 2025-09-19 11:47:20.606565 | orchestrator | 2025-09-19 11:47:20.606579 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-19 11:47:20.606604 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:23.587) 0:02:03.871 ****** 2025-09-19 11:47:20.606632 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.606651 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:20.606667 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:47:20.606682 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.606692 | orchestrator | changed: [testbed-manager] 2025-09-19 11:47:20.606701 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.606711 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:20.606721 | orchestrator | 2025-09-19 11:47:20.606731 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-mysqld-exporter container] **** 2025-09-19 11:47:20.606741 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:12.944) 0:02:16.815 ****** 2025-09-19 11:47:20.606751 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.606761 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.606771 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.606781 | orchestrator | 2025-09-19 11:47:20.606790 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-19 11:47:20.606801 | orchestrator | Friday 19 September 2025 11:46:24 +0000 (0:00:06.082) 0:02:22.898 ****** 2025-09-19 11:47:20.606810 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.606820 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.606830 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.606859 | orchestrator | 2025-09-19 11:47:20.606870 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-19 11:47:20.606879 | orchestrator | Friday 19 September 2025 11:46:34 +0000 (0:00:10.530) 0:02:33.429 ****** 2025-09-19 11:47:20.606889 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.606900 | orchestrator | changed: [testbed-manager] 2025-09-19 11:47:20.606910 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.606919 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.606929 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:20.606938 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:47:20.606948 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:20.606957 | orchestrator | 2025-09-19 11:47:20.606967 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-19 11:47:20.606976 | orchestrator | Friday 19 September 2025 11:46:45 +0000 (0:00:10.637) 0:02:44.066 ****** 2025-09-19 11:47:20.606986 | orchestrator | changed: 
[testbed-manager] 2025-09-19 11:47:20.606996 | orchestrator | 2025-09-19 11:47:20.607005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-19 11:47:20.607015 | orchestrator | Friday 19 September 2025 11:46:53 +0000 (0:00:07.995) 0:02:52.061 ****** 2025-09-19 11:47:20.607025 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:20.607035 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:20.607045 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:20.607054 | orchestrator | 2025-09-19 11:47:20.607064 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-19 11:47:20.607073 | orchestrator | Friday 19 September 2025 11:47:02 +0000 (0:00:09.242) 0:03:01.303 ****** 2025-09-19 11:47:20.607083 | orchestrator | changed: [testbed-manager] 2025-09-19 11:47:20.607093 | orchestrator | 2025-09-19 11:47:20.607102 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-19 11:47:20.607112 | orchestrator | Friday 19 September 2025 11:47:07 +0000 (0:00:04.994) 0:03:06.298 ****** 2025-09-19 11:47:20.607123 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:47:20.607132 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:20.607142 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:20.607152 | orchestrator | 2025-09-19 11:47:20.607161 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:47:20.607172 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:47:20.607183 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:47:20.607232 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:47:20.607243 | orchestrator | 
testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:47:20.607253 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 11:47:20.607263 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 11:47:20.607273 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 11:47:20.607282 | orchestrator | 2025-09-19 11:47:20.607292 | orchestrator | 2025-09-19 11:47:20.607302 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:47:20.607312 | orchestrator | Friday 19 September 2025 11:47:18 +0000 (0:00:10.991) 0:03:17.289 ****** 2025-09-19 11:47:20.607321 | orchestrator | =============================================================================== 2025-09-19 11:47:20.607335 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.84s 2025-09-19 11:47:20.607345 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.59s 2025-09-19 11:47:20.607355 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.20s 2025-09-19 11:47:20.607364 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.94s 2025-09-19 11:47:20.607374 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.99s 2025-09-19 11:47:20.607391 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 10.64s 2025-09-19 11:47:20.607401 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.53s 2025-09-19 11:47:20.607410 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.24s 2025-09-19 11:47:20.607420 | orchestrator | prometheus : 
Restart prometheus-alertmanager container ------------------ 8.00s 2025-09-19 11:47:20.607430 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.44s 2025-09-19 11:47:20.607439 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.08s 2025-09-19 11:47:20.607449 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.08s 2025-09-19 11:47:20.607458 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.45s 2025-09-19 11:47:20.607468 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.99s 2025-09-19 11:47:20.607477 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.16s 2025-09-19 11:47:20.607487 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.72s 2025-09-19 11:47:20.607496 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.46s 2025-09-19 11:47:20.607506 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.87s 2025-09-19 11:47:20.607516 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.58s 2025-09-19 11:47:20.607525 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.53s 2025-09-19 11:47:20.607535 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:20.607544 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:20.607554 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:20.607564 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 
11:47:20.607580 | orchestrator | 2025-09-19 11:47:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:23.655030 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:23.657480 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:23.658099 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:23.658767 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:23.658891 | orchestrator | 2025-09-19 11:47:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:26.693368 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:26.693455 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:26.695334 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:26.696878 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:26.696907 | orchestrator | 2025-09-19 11:47:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:29.742314 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state STARTED 2025-09-19 11:47:29.743014 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:29.746600 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:29.748571 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:29.748777 | orchestrator 
| 2025-09-19 11:47:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:32.800568 | orchestrator | 2025-09-19 11:47:32.800705 | orchestrator | 2025-09-19 11:47:32.800731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:47:32.801255 | orchestrator | 2025-09-19 11:47:32.801281 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:47:32.801322 | orchestrator | Friday 19 September 2025 11:44:13 +0000 (0:00:00.211) 0:00:00.211 ****** 2025-09-19 11:47:32.801342 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:47:32.801363 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:47:32.801382 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:47:32.801400 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:47:32.801419 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:47:32.801436 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:47:32.801453 | orchestrator | 2025-09-19 11:47:32.801469 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:47:32.801486 | orchestrator | Friday 19 September 2025 11:44:13 +0000 (0:00:00.567) 0:00:00.779 ****** 2025-09-19 11:47:32.801503 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-19 11:47:32.801520 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-19 11:47:32.801537 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-19 11:47:32.801555 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-19 11:47:32.801572 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-19 11:47:32.801589 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-19 11:47:32.801607 | orchestrator | 2025-09-19 11:47:32.801691 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-19 
11:47:32.801729 | orchestrator | 2025-09-19 11:47:32.801741 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:47:32.801752 | orchestrator | Friday 19 September 2025 11:44:14 +0000 (0:00:00.513) 0:00:01.293 ****** 2025-09-19 11:47:32.801764 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:47:32.801776 | orchestrator | 2025-09-19 11:47:32.801787 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-19 11:47:32.801798 | orchestrator | Friday 19 September 2025 11:44:15 +0000 (0:00:01.002) 0:00:02.295 ****** 2025-09-19 11:47:32.801809 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-19 11:47:32.801843 | orchestrator | 2025-09-19 11:47:32.801854 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-19 11:47:32.801865 | orchestrator | Friday 19 September 2025 11:44:19 +0000 (0:00:03.708) 0:00:06.004 ****** 2025-09-19 11:47:32.801877 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-19 11:47:32.801888 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-19 11:47:32.802404 | orchestrator | 2025-09-19 11:47:32.802419 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-19 11:47:32.802430 | orchestrator | Friday 19 September 2025 11:44:26 +0000 (0:00:07.054) 0:00:13.059 ****** 2025-09-19 11:47:32.802441 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:47:32.802452 | orchestrator | 2025-09-19 11:47:32.802462 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 
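The `service-ks-register` output above shows kolla-ansible registering one Keystone endpoint per interface, deriving each URL from an FQDN, the service port, and the `%(tenant_id)s` placeholder that Keystone substitutes at request time. A minimal sketch of that URL construction (the helper name and defaults are illustrative, not actual kolla-ansible code):

```python
def cinder_endpoints(internal_fqdn: str, external_fqdn: str, port: int = 8776) -> dict:
    """Build per-interface endpoint URLs for cinderv3, as seen in the
    'Creating endpoints' task above. The %(tenant_id)s placeholder is
    left literal; Keystone fills it in per request."""
    suffix = "/v3/%(tenant_id)s"
    return {
        "internal": f"https://{internal_fqdn}:{port}{suffix}",
        "public": f"https://{external_fqdn}:{port}{suffix}",
    }

urls = cinder_endpoints("api-int.testbed.osism.xyz", "api.testbed.osism.xyz")
```

This mirrors the two `changed:` items logged for testbed-node-0, where only the FQDN differs between the internal and public records.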
2025-09-19 11:47:32.802473 | orchestrator | Friday 19 September 2025 11:44:29 +0000 (0:00:03.658) 0:00:16.717 ****** 2025-09-19 11:47:32.802484 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:47:32.802495 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-19 11:47:32.802505 | orchestrator | 2025-09-19 11:47:32.802516 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-19 11:47:32.802527 | orchestrator | Friday 19 September 2025 11:44:34 +0000 (0:00:04.270) 0:00:20.988 ****** 2025-09-19 11:47:32.802537 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:47:32.802548 | orchestrator | 2025-09-19 11:47:32.802559 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-19 11:47:32.802569 | orchestrator | Friday 19 September 2025 11:44:37 +0000 (0:00:03.196) 0:00:24.184 ****** 2025-09-19 11:47:32.802580 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-19 11:47:32.802591 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-19 11:47:32.802601 | orchestrator | 2025-09-19 11:47:32.802612 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-19 11:47:32.802622 | orchestrator | Friday 19 September 2025 11:44:45 +0000 (0:00:08.558) 0:00:32.743 ****** 2025-09-19 11:47:32.802638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.802721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.802750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.802869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.802998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.803010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.803022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.803040 | orchestrator | 2025-09-19 11:47:32.803102 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:47:32.803123 | orchestrator | Friday 19 September 2025 11:44:48 +0000 (0:00:02.477) 0:00:35.221 ****** 2025-09-19 11:47:32.803141 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.803166 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.803185 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.803202 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.803220 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.803238 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.803255 | orchestrator | 2025-09-19 11:47:32.803272 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:47:32.803283 | orchestrator | Friday 19 September 2025 11:44:48 +0000 (0:00:00.538) 0:00:35.759 ****** 2025-09-19 11:47:32.803293 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 11:47:32.803304 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.803314 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.803325 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:47:32.803336 | orchestrator | 2025-09-19 11:47:32.803346 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-19 11:47:32.803357 | orchestrator | Friday 19 September 2025 11:44:49 +0000 (0:00:00.894) 0:00:36.653 ****** 2025-09-19 11:47:32.803367 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-19 11:47:32.803378 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-19 11:47:32.803389 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-19 11:47:32.803399 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-19 11:47:32.803410 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-19 11:47:32.803420 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-19 11:47:32.803431 | orchestrator | 2025-09-19 11:47:32.803441 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-19 11:47:32.803452 | orchestrator | Friday 19 September 2025 11:44:51 +0000 (0:00:01.818) 0:00:38.472 ****** 2025-09-19 11:47:32.803464 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803479 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803502 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803580 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803606 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803629 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:47:32.803651 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803680 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803737 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803752 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803766 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803779 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:47:32.803798 | orchestrator | 2025-09-19 11:47:32.803810 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-19 11:47:32.804011 | orchestrator | Friday 19 September 2025 11:44:56 +0000 (0:00:04.613) 0:00:43.085 ****** 2025-09-19 
11:47:32.804024 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:47:32.804037 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:47:32.804051 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 11:47:32.804069 | orchestrator | 2025-09-19 11:47:32.804087 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-19 11:47:32.804104 | orchestrator | Friday 19 September 2025 11:44:58 +0000 (0:00:02.222) 0:00:45.307 ****** 2025-09-19 11:47:32.804122 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-19 11:47:32.804140 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-19 11:47:32.804159 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-19 11:47:32.804177 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:47:32.804195 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:47:32.804268 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:47:32.804282 | orchestrator | 2025-09-19 11:47:32.804293 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-19 11:47:32.804312 | orchestrator | Friday 19 September 2025 11:45:01 +0000 (0:00:03.304) 0:00:48.611 ****** 2025-09-19 11:47:32.804323 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-19 11:47:32.804334 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-19 11:47:32.804345 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-19 11:47:32.804356 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-19 
11:47:32.804367 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-19 11:47:32.804377 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-19 11:47:32.804388 | orchestrator | 2025-09-19 11:47:32.804399 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-19 11:47:32.804410 | orchestrator | Friday 19 September 2025 11:45:02 +0000 (0:00:01.126) 0:00:49.738 ****** 2025-09-19 11:47:32.804421 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.804431 | orchestrator | 2025-09-19 11:47:32.804442 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-19 11:47:32.804453 | orchestrator | Friday 19 September 2025 11:45:03 +0000 (0:00:00.178) 0:00:49.916 ****** 2025-09-19 11:47:32.804464 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.804475 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.804486 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.804497 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.804508 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.804518 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.804529 | orchestrator | 2025-09-19 11:47:32.804540 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:47:32.804551 | orchestrator | Friday 19 September 2025 11:45:03 +0000 (0:00:00.694) 0:00:50.611 ****** 2025-09-19 11:47:32.804563 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:47:32.804593 | orchestrator | 2025-09-19 11:47:32.804604 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-19 11:47:32.804615 | orchestrator | Friday 19 September 2025 11:45:05 +0000 (0:00:01.290) 0:00:51.902 
****** 2025-09-19 11:47:32.804627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.804640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.804684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.804702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32 | INFO  | Task bbffa5ab-7be4-4d78-98fc-e7aa6802c8ef is in state SUCCESS 2025-09-19 11:47:32.804872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.804914 | orchestrator | 2025-09-19 11:47:32.804925 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-19 11:47:32.804936 | orchestrator | Friday 19 September 2025 11:45:08 +0000 (0:00:03.239) 0:00:55.141 ****** 2025-09-19 11:47:32.804947 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.804995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805008 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.805026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.805044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805055 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.805066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.805078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805089 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.805100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805142 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.805154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805177 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.805188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805210 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.805221 | orchestrator | 2025-09-19 11:47:32.805232 | 
orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 11:47:32.805243 | orchestrator | Friday 19 September 2025 11:45:10 +0000 (0:00:01.912) 0:00:57.054 ****** 2025-09-19 11:47:32.805274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.805305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.805344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805364 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.805383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.805417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805437 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.805448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.805460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805471 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805482 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.805493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805515 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.805538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.805568 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.805579 | orchestrator | 2025-09-19 11:47:32.805590 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 11:47:32.805601 | orchestrator | Friday 19 September 2025 11:45:12 +0000 (0:00:01.936) 0:00:58.990 ****** 2025-09-19 11:47:32.805612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.805624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.805635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.805669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805768 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.805791 | orchestrator | 2025-09-19 
11:47:32.805801 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 11:47:32.805812 | orchestrator | Friday 19 September 2025 11:45:16 +0000 (0:00:03.917) 0:01:02.908 ****** 2025-09-19 11:47:32.805853 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 11:47:32.805872 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.805889 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 11:47:32.805908 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.805926 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 11:47:32.805944 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.805963 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:47:32.805981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:47:32.806000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:47:32.806079 | orchestrator | 2025-09-19 11:47:32.806104 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 11:47:32.806137 | orchestrator | Friday 19 September 2025 11:45:17 +0000 (0:00:01.907) 0:01:04.815 ****** 2025-09-19 11:47:32.806157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.806191 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.806215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.806227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806343 | orchestrator | 2025-09-19 11:47:32.806359 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 11:47:32.806371 | orchestrator | Friday 19 September 2025 11:45:28 +0000 (0:00:10.672) 0:01:15.488 ****** 2025-09-19 11:47:32.806382 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.806392 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.806408 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.806419 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:32.806429 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:47:32.806440 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:32.806450 | orchestrator | 2025-09-19 11:47:32.806461 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 11:47:32.806472 | 
orchestrator | Friday 19 September 2025 11:45:30 +0000 (0:00:02.299) 0:01:17.787 ****** 2025-09-19 11:47:32.806483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.806494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806505 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.806516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.806534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806545 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.806568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:47:32.806580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806592 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.806603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806633 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.806644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806672 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.806688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:47:32.806700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
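Each container record above carries a `healthcheck` dict (`interval: 30`, `retries: 3`, `timeout: 30`) whose test is a helper such as `healthcheck_port cinder-backup 5672` or `healthcheck_curl http://…:8776`. The following is a minimal sketch of what a port-style check amounts to, assuming a plain TCP reachability probe stands in for kolla's `healthcheck_port` helper (the real helper also verifies which process owns the connection; `check_with_retries` and its parameters are illustrative names, not part of kolla):

```python
import socket


def check_tcp_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_with_retries(host: str, port: int, retries: int = 3,
                       timeout: float = 30.0) -> bool:
    """Mirror the healthcheck dict from the log: retries=3, timeout=30."""
    # The container is considered healthy if any attempt succeeds.
    return any(check_tcp_port(host, port, timeout) for _ in range(retries))
```

The `healthcheck_curl` variant used by `cinder_api` does the analogous thing over HTTP against the API bind address instead of a raw TCP connect.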
2025-09-19 11:47:32.806711 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.806722 | orchestrator | 2025-09-19 11:47:32.806732 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 11:47:32.806743 | orchestrator | Friday 19 September 2025 11:45:32 +0000 (0:00:01.729) 0:01:19.516 ****** 2025-09-19 11:47:32.806760 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.806771 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.806781 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.806792 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.806803 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.806813 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.806881 | orchestrator | 2025-09-19 11:47:32.806893 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-19 11:47:32.806904 | orchestrator | Friday 19 September 2025 11:45:33 +0000 (0:00:00.629) 0:01:20.146 ****** 2025-09-19 11:47:32.806915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 
11:47:32.806927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.806955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:47:32.806986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.806998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:47:32.807094 | orchestrator | 2025-09-19 11:47:32.807105 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:47:32.807116 | orchestrator | Friday 19 September 2025 11:45:35 +0000 (0:00:02.471) 0:01:22.618 ****** 2025-09-19 11:47:32.807127 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.807138 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:47:32.807149 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:47:32.807159 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:47:32.807170 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:47:32.807181 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:47:32.807191 | orchestrator | 2025-09-19 11:47:32.807202 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 11:47:32.807213 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:01.124) 0:01:23.742 ****** 2025-09-19 11:47:32.807223 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:32.807234 | orchestrator | 2025-09-19 11:47:32.807245 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-19 11:47:32.807255 | orchestrator | Friday 19 September 2025 11:45:39 +0000 (0:00:02.481) 0:01:26.224 ****** 2025-09-19 11:47:32.807266 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 11:47:32.807276 | orchestrator | 2025-09-19 11:47:32.807287 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 11:47:32.807298 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:02.281) 0:01:28.505 ****** 2025-09-19 11:47:32.807309 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:32.807319 | orchestrator | 2025-09-19 11:47:32.807333 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807342 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:16.332) 0:01:44.838 ****** 2025-09-19 11:47:32.807352 | orchestrator | 2025-09-19 11:47:32.807362 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807378 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.069) 0:01:44.908 ****** 2025-09-19 11:47:32.807387 | orchestrator | 2025-09-19 11:47:32.807409 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807419 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.063) 0:01:44.972 ****** 2025-09-19 11:47:32.807428 | orchestrator | 2025-09-19 11:47:32.807438 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807447 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.070) 0:01:45.043 ****** 2025-09-19 11:47:32.807457 | orchestrator | 2025-09-19 11:47:32.807466 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807475 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.066) 0:01:45.109 ****** 2025-09-19 11:47:32.807485 | orchestrator | 2025-09-19 11:47:32.807494 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:47:32.807504 
| orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.069) 0:01:45.178 ****** 2025-09-19 11:47:32.807513 | orchestrator | 2025-09-19 11:47:32.807523 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 11:47:32.807532 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:00.067) 0:01:45.246 ****** 2025-09-19 11:47:32.807541 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:32.807551 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:32.807560 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:32.807570 | orchestrator | 2025-09-19 11:47:32.807580 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 11:47:32.807589 | orchestrator | Friday 19 September 2025 11:46:21 +0000 (0:00:23.323) 0:02:08.569 ****** 2025-09-19 11:47:32.807599 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:47:32.807608 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:47:32.807618 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:47:32.807627 | orchestrator | 2025-09-19 11:47:32.807637 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 11:47:32.807646 | orchestrator | Friday 19 September 2025 11:46:27 +0000 (0:00:05.541) 0:02:14.111 ****** 2025-09-19 11:47:32.807656 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:32.807665 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:32.807674 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:47:32.807684 | orchestrator | 2025-09-19 11:47:32.807694 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-19 11:47:32.807703 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:56.014) 0:03:10.125 ****** 2025-09-19 11:47:32.807713 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:47:32.807722 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 11:47:32.807732 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:47:32.807741 | orchestrator | 2025-09-19 11:47:32.807751 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 11:47:32.807760 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:05.402) 0:03:15.527 ****** 2025-09-19 11:47:32.807770 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:47:32.807779 | orchestrator | 2025-09-19 11:47:32.807788 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:47:32.807798 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:47:32.807808 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 11:47:32.807861 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 11:47:32.807873 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:47:32.807883 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:47:32.807899 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:47:32.807909 | orchestrator | 2025-09-19 11:47:32.807919 | orchestrator | 2025-09-19 11:47:32.807928 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:47:32.807938 | orchestrator | Friday 19 September 2025 11:47:29 +0000 (0:00:00.772) 0:03:16.299 ****** 2025-09-19 11:47:32.807947 | orchestrator | =============================================================================== 2025-09-19 11:47:32.807957 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 56.01s 
2025-09-19 11:47:32.807967 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.32s 2025-09-19 11:47:32.807976 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.33s 2025-09-19 11:47:32.807984 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.67s 2025-09-19 11:47:32.807991 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.56s 2025-09-19 11:47:32.807999 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.05s 2025-09-19 11:47:32.808007 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.54s 2025-09-19 11:47:32.808020 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.40s 2025-09-19 11:47:32.808028 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.61s 2025-09-19 11:47:32.808040 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.27s 2025-09-19 11:47:32.808048 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.92s 2025-09-19 11:47:32.808056 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.71s 2025-09-19 11:47:32.808064 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.66s 2025-09-19 11:47:32.808072 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.30s 2025-09-19 11:47:32.808080 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.24s 2025-09-19 11:47:32.808088 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.20s 2025-09-19 11:47:32.808095 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.48s 2025-09-19 
11:47:32.808103 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.48s 2025-09-19 11:47:32.808111 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.47s 2025-09-19 11:47:32.808119 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.30s 2025-09-19 11:47:32.808126 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:32.808135 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:32.808143 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:32.808151 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:32.808159 | orchestrator | 2025-09-19 11:47:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:35.863428 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:35.864453 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:35.865431 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:35.867534 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:35.867631 | orchestrator | 2025-09-19 11:47:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:38.908072 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:38.909312 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:38.911037 | orchestrator | 2025-09-19 
11:47:38 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:38.912753 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:38.912895 | orchestrator | 2025-09-19 11:47:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:41.955224 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:41.955973 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:41.958402 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:41.959724 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:41.959757 | orchestrator | 2025-09-19 11:47:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:45.009567 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:45.009757 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:45.011194 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:45.012340 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:45.012372 | orchestrator | 2025-09-19 11:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:48.074577 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:48.074715 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:48.075432 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:48.076385 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:48.076419 | orchestrator | 2025-09-19 11:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:51.120070 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:51.122169 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:51.123197 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:51.124700 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:51.125593 | orchestrator | 2025-09-19 11:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:54.160091 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:54.160206 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:54.160779 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:54.161001 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:54.161273 | orchestrator | 2025-09-19 11:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:57.207350 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:47:57.209170 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:47:57.210068 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:47:57.211205 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:47:57.211281 | orchestrator | 2025-09-19 11:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:00.241742 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:48:00.243490 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:48:00.244975 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:48:00.246233 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:48:00.246375 | orchestrator | 2025-09-19 11:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:03.274745 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:48:03.276106 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:48:03.276146 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:48:03.278223 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:48:03.278250 | orchestrator | 2025-09-19 11:48:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:06.348593 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:48:06.354303 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:48:06.359817 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:48:06.360712 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state STARTED 2025-09-19 11:48:06.360757 | orchestrator | 2025-09-19 11:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:25.263946 | orchestrator | 2025-09-19 11:49:25 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:25.265103 | orchestrator | 2025-09-19 11:49:25 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:25.266337 | orchestrator | 2025-09-19 11:49:25 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:25.267920 | orchestrator | 2025-09-19 11:49:25 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:25.269769 | orchestrator | 2025-09-19 11:49:25 | INFO  | Task 21ce88ca-5dbc-4ab5-8866-13f4247d71f0 is in state SUCCESS 2025-09-19 11:49:25.269976 | orchestrator | 2025-09-19 11:49:25.271811 | orchestrator | 2025-09-19 11:49:25.271856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:49:25.271870 | orchestrator | 2025-09-19 11:49:25.272192 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-19 11:49:25.272219 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:00.253) 0:00:00.253 ****** 2025-09-19 11:49:25.272231 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:49:25.272244 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:49:25.272255 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:49:25.272266 | orchestrator | 2025-09-19 11:49:25.272277 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:49:25.272288 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:00.289) 0:00:00.542 ****** 2025-09-19 11:49:25.272300 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-19 11:49:25.272312 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-19 11:49:25.272323 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-19 11:49:25.272334 | orchestrator | 2025-09-19 11:49:25.272345 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-19 11:49:25.272356 | orchestrator | 2025-09-19 11:49:25.272367 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:49:25.272379 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:00.370) 0:00:00.913 ****** 2025-09-19 11:49:25.272390 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:49:25.272401 | orchestrator | 2025-09-19 11:49:25.272412 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-19 11:49:25.272424 | orchestrator | Friday 19 September 2025 11:47:24 +0000 (0:00:00.580) 0:00:01.493 ****** 2025-09-19 11:49:25.272435 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-19 11:49:25.272446 | orchestrator | 2025-09-19 11:49:25.272458 | 
orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-19 11:49:25.272468 | orchestrator | Friday 19 September 2025 11:47:27 +0000 (0:00:03.500) 0:00:04.993 ****** 2025-09-19 11:49:25.272479 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-19 11:49:25.272542 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-19 11:49:25.272593 | orchestrator | 2025-09-19 11:49:25.272613 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-19 11:49:25.272631 | orchestrator | Friday 19 September 2025 11:47:34 +0000 (0:00:06.259) 0:00:11.254 ****** 2025-09-19 11:49:25.272762 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:49:25.272791 | orchestrator | 2025-09-19 11:49:25.272810 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-19 11:49:25.272828 | orchestrator | Friday 19 September 2025 11:47:37 +0000 (0:00:03.473) 0:00:14.727 ****** 2025-09-19 11:49:25.272845 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:49:25.272864 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-19 11:49:25.272883 | orchestrator | 2025-09-19 11:49:25.272901 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-19 11:49:25.272918 | orchestrator | Friday 19 September 2025 11:47:41 +0000 (0:00:03.831) 0:00:18.559 ****** 2025-09-19 11:49:25.272937 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:49:25.272957 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-19 11:49:25.272977 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-19 11:49:25.272995 | orchestrator | changed: [testbed-node-0] => (item=observer) 
2025-09-19 11:49:25.273013 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-19 11:49:25.273030 | orchestrator | 2025-09-19 11:49:25.273048 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-19 11:49:25.273067 | orchestrator | Friday 19 September 2025 11:47:58 +0000 (0:00:16.744) 0:00:35.304 ****** 2025-09-19 11:49:25.273084 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-19 11:49:25.273102 | orchestrator | 2025-09-19 11:49:25.273120 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-19 11:49:25.273141 | orchestrator | Friday 19 September 2025 11:48:02 +0000 (0:00:04.704) 0:00:40.008 ****** 2025-09-19 11:49:25.273164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.273207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.273229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.273305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.273459 | orchestrator | 2025-09-19 11:49:25.273479 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-19 11:49:25.273498 | orchestrator | Friday 19 September 2025 11:48:05 +0000 (0:00:02.870) 0:00:42.878 ****** 2025-09-19 11:49:25.273517 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-19 
11:49:25.273534 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-19 11:49:25.273552 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-19 11:49:25.273652 | orchestrator | 2025-09-19 11:49:25.273689 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-19 11:49:25.273709 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:01.271) 0:00:44.150 ****** 2025-09-19 11:49:25.273720 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.273731 | orchestrator | 2025-09-19 11:49:25.273742 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-19 11:49:25.273755 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:00.109) 0:00:44.260 ****** 2025-09-19 11:49:25.273774 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.273792 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.273810 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.273827 | orchestrator | 2025-09-19 11:49:25.273845 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:49:25.273863 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:00.332) 0:00:44.593 ****** 2025-09-19 11:49:25.273882 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:49:25.273902 | orchestrator | 2025-09-19 11:49:25.273920 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-19 11:49:25.273938 | orchestrator | Friday 19 September 2025 11:48:08 +0000 (0:00:00.810) 0:00:45.403 ****** 2025-09-19 11:49:25.273959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 
11:49:25.274231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274242 | orchestrator | 2025-09-19 11:49:25.274253 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-19 11:49:25.274264 | orchestrator | Friday 19 September 2025 11:48:12 +0000 (0:00:04.356) 0:00:49.760 ****** 2025-09-19 11:49:25.274280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274315 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.274334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274377 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.274394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274435 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.274446 | orchestrator | 
2025-09-19 11:49:25.274457 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-19 11:49:25.274467 | orchestrator | Friday 19 September 2025 11:48:13 +0000 (0:00:01.266) 0:00:51.027 ****** 2025-09-19 11:49:25.274485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274524 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.274536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274583 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.274602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.274614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.274641 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.274652 | orchestrator | 2025-09-19 11:49:25.274683 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-19 11:49:25.274694 | orchestrator | Friday 19 September 2025 11:48:14 +0000 (0:00:01.062) 0:00:52.090 ****** 2025-09-19 11:49:25.274706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.274754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.274840 | orchestrator | 2025-09-19 11:49:25.274851 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-19 11:49:25.274862 | orchestrator | Friday 19 September 2025 11:48:19 +0000 (0:00:04.373) 0:00:56.464 ****** 2025-09-19 11:49:25.274873 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:49:25.274884 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:49:25.274895 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:49:25.274905 | orchestrator | 2025-09-19 11:49:25.274916 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-19 11:49:25.274939 | orchestrator | Friday 19 September 2025 11:48:23 +0000 (0:00:03.949) 0:01:00.413 ****** 2025-09-19 11:49:25.274951 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:49:25.274962 | orchestrator | 2025-09-19 11:49:25.274973 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-19 11:49:25.274983 | orchestrator | Friday 19 September 2025 11:48:25 +0000 (0:00:01.742) 0:01:02.155 ****** 2025-09-19 11:49:25.274994 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.275005 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.275015 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.275026 | orchestrator | 2025-09-19 11:49:25.275036 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-19 11:49:25.275047 | orchestrator | Friday 19 
September 2025 11:48:25 +0000 (0:00:00.613) 0:01:02.769 ****** 2025-09-19 11:49:25.275063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.275082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 
11:49:25.275101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.275113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275234 | orchestrator | 2025-09-19 11:49:25.275245 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 11:49:25.275256 | orchestrator | Friday 19 September 2025 11:48:36 +0000 (0:00:10.608) 0:01:13.377 ****** 2025-09-19 11:49:25.275275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.275286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275321 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.275333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.275344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275374 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.275385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:49:25.275434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:49:25.275465 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.275476 | orchestrator | 2025-09-19 11:49:25.275487 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 11:49:25.275498 | orchestrator | Friday 19 September 2025 11:48:37 +0000 (0:00:01.351) 0:01:14.728 ****** 2025-09-19 11:49:25.275509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.275529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.275541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:25.275563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:25.275641 | orchestrator | 2025-09-19 11:49:25.275652 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:49:25.275690 | orchestrator | Friday 19 September 2025 11:48:41 +0000 (0:00:03.697) 0:01:18.426 ****** 2025-09-19 11:49:25.275702 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:25.275713 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:25.275723 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:25.275734 | orchestrator | 2025-09-19 11:49:25.275745 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-19 
11:49:25.275756 | orchestrator | Friday 19 September 2025 11:48:41 +0000 (0:00:00.356) 0:01:18.783 ******
2025-09-19 11:49:25.275766 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.275777 | orchestrator |
2025-09-19 11:49:25.275788 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-19 11:49:25.275798 | orchestrator | Friday 19 September 2025 11:48:43 +0000 (0:00:02.285) 0:01:21.068 ******
2025-09-19 11:49:25.275809 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.275820 | orchestrator |
2025-09-19 11:49:25.275840 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-19 11:49:25.275851 | orchestrator | Friday 19 September 2025 11:48:46 +0000 (0:00:02.643) 0:01:23.712 ******
2025-09-19 11:49:25.275867 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.275878 | orchestrator |
2025-09-19 11:49:25.275889 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 11:49:25.275899 | orchestrator | Friday 19 September 2025 11:48:57 +0000 (0:00:11.343) 0:01:35.055 ******
2025-09-19 11:49:25.275910 | orchestrator |
2025-09-19 11:49:25.275921 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 11:49:25.275931 | orchestrator | Friday 19 September 2025 11:48:58 +0000 (0:00:00.079) 0:01:35.134 ******
2025-09-19 11:49:25.275942 | orchestrator |
2025-09-19 11:49:25.275952 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 11:49:25.275963 | orchestrator | Friday 19 September 2025 11:48:58 +0000 (0:00:00.050) 0:01:35.184 ******
2025-09-19 11:49:25.275974 | orchestrator |
2025-09-19 11:49:25.275984 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-19 11:49:25.275995 | orchestrator | Friday 19 September 2025 11:48:58 +0000 (0:00:00.074) 0:01:35.259 ******
2025-09-19 11:49:25.276006 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.276046 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:25.276058 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:25.276069 | orchestrator |
2025-09-19 11:49:25.276080 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-19 11:49:25.276091 | orchestrator | Friday 19 September 2025 11:49:10 +0000 (0:00:12.449) 0:01:47.709 ******
2025-09-19 11:49:25.276102 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:25.276112 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:25.276123 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.276134 | orchestrator |
2025-09-19 11:49:25.276145 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-19 11:49:25.276155 | orchestrator | Friday 19 September 2025 11:49:18 +0000 (0:00:08.404) 0:01:56.113 ******
2025-09-19 11:49:25.276166 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:25.276177 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:25.276187 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:25.276198 | orchestrator |
2025-09-19 11:49:25.276209 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:49:25.276220 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:49:25.276233 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:49:25.276244 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:49:25.276261 | orchestrator |
2025-09-19 11:49:25.276272 | orchestrator |
2025-09-19 11:49:25.276283 | orchestrator | TASKS RECAP
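The barbican container definitions earlier in this log attach healthchecks such as `healthcheck_curl http://192.168.16.10:9311` and `healthcheck_port barbican-worker 5672`. Kolla ships these checks as shell helpers inside the images; as a rough, hypothetical illustration only, probes of this kind reduce to an HTTP GET or a TCP connect with a timeout:

```python
import socket
import urllib.request


def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP-connect probe, roughly what a port-based healthcheck amounts to."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_http(url: str, timeout: float = 2.0) -> bool:
    """HTTP GET probe, roughly what a curl-based healthcheck amounts to."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Treat any non-5xx response as healthy.
            return resp.status < 500
    except OSError:
        return False
```

The container runtime runs the real check on the configured `interval` and marks the container unhealthy after `retries` consecutive failures; the sketch above only mirrors the pass/fail semantics.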
********************************************************************
2025-09-19 11:49:25.276294 | orchestrator | Friday 19 September 2025 11:49:23 +0000 (0:00:04.751) 0:02:00.865 ******
2025-09-19 11:49:25.276337 | orchestrator | ===============================================================================
2025-09-19 11:49:25.276350 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.74s
2025-09-19 11:49:25.276368 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.45s
2025-09-19 11:49:25.276379 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.34s
2025-09-19 11:49:25.276390 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.61s
2025-09-19 11:49:25.276401 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.40s
2025-09-19 11:49:25.276412 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.26s
2025-09-19 11:49:25.276422 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 4.75s
2025-09-19 11:49:25.276433 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.70s
2025-09-19 11:49:25.276444 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.37s
2025-09-19 11:49:25.276455 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.36s
2025-09-19 11:49:25.276465 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.95s
2025-09-19 11:49:25.276476 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s
2025-09-19 11:49:25.276487 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.70s
2025-09-19 11:49:25.276498 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.50s
2025-09-19 11:49:25.276508 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s
2025-09-19 11:49:25.276519 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.87s
2025-09-19 11:49:25.276530 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.64s
2025-09-19 11:49:25.276541 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.29s
2025-09-19 11:49:25.276551 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.74s
2025-09-19 11:49:25.276562 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.35s
2025-09-19 11:49:25.276573 | orchestrator | 2025-09-19 11:49:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:49:28.309250 | orchestrator | 2025-09-19 11:49:28 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:49:28.309461 | orchestrator | 2025-09-19 11:49:28 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:49:28.310551 | orchestrator | 2025-09-19 11:49:28 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:49:28.311384 | orchestrator | 2025-09-19 11:49:28 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:49:28.311581 | orchestrator | 2025-09-19 11:49:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:49:31.338281 | orchestrator | 2025-09-19 11:49:31 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:49:31.341293 | orchestrator | 2025-09-19 11:49:31 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:49:31.345570 | orchestrator | 2025-09-19 11:49:31 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:49:31.347437 |
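The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a client polling asynchronous task states until every task reaches a terminal state. A minimal sketch of such a loop, assuming a hypothetical `get_state` callable standing in for the real task-state lookup:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, log=print):
    """Poll each task id until it leaves STARTED, logging in the same
    shape as the job output above. Returns the final state per task."""
    final = {}
    pending = list(task_ids)
    while pending:
        still_running = []
        for tid in pending:
            state = get_state(tid)
            log(f"Task {tid} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                final[tid] = state
            else:
                still_running.append(tid)
        pending = still_running
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return final
```

In the log above, four tasks stay STARTED for many rounds before 41748e24-64dd-46b7-9ac7-5c169eece8bd flips to SUCCESS and its play output is printed.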
orchestrator | 2025-09-19 11:49:31 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:31.347717 | orchestrator | 2025-09-19 11:49:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:34.377510 | orchestrator | 2025-09-19 11:49:34 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:34.377743 | orchestrator | 2025-09-19 11:49:34 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:34.378357 | orchestrator | 2025-09-19 11:49:34 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:34.378969 | orchestrator | 2025-09-19 11:49:34 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:34.378992 | orchestrator | 2025-09-19 11:49:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:37.428722 | orchestrator | 2025-09-19 11:49:37 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:37.431039 | orchestrator | 2025-09-19 11:49:37 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:37.432089 | orchestrator | 2025-09-19 11:49:37 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:37.435168 | orchestrator | 2025-09-19 11:49:37 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:37.435203 | orchestrator | 2025-09-19 11:49:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:40.483274 | orchestrator | 2025-09-19 11:49:40 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:40.483455 | orchestrator | 2025-09-19 11:49:40 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:40.484462 | orchestrator | 2025-09-19 11:49:40 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:40.488881 | orchestrator | 2025-09-19 
11:49:40 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:40.488904 | orchestrator | 2025-09-19 11:49:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:43.522313 | orchestrator | 2025-09-19 11:49:43 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:43.523429 | orchestrator | 2025-09-19 11:49:43 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:43.524751 | orchestrator | 2025-09-19 11:49:43 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:43.526319 | orchestrator | 2025-09-19 11:49:43 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:43.527431 | orchestrator | 2025-09-19 11:49:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:46.567929 | orchestrator | 2025-09-19 11:49:46 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:46.568165 | orchestrator | 2025-09-19 11:49:46 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:46.568856 | orchestrator | 2025-09-19 11:49:46 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:46.570825 | orchestrator | 2025-09-19 11:49:46 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:46.570856 | orchestrator | 2025-09-19 11:49:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:49.608556 | orchestrator | 2025-09-19 11:49:49 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:49.608707 | orchestrator | 2025-09-19 11:49:49 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:49.608736 | orchestrator | 2025-09-19 11:49:49 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:49.610701 | orchestrator | 2025-09-19 11:49:49 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:49.610722 | orchestrator | 2025-09-19 11:49:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:52.651415 | orchestrator | 2025-09-19 11:49:52 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:52.651530 | orchestrator | 2025-09-19 11:49:52 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:52.655548 | orchestrator | 2025-09-19 11:49:52 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:52.656496 | orchestrator | 2025-09-19 11:49:52 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:52.656526 | orchestrator | 2025-09-19 11:49:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:55.686442 | orchestrator | 2025-09-19 11:49:55 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:55.686551 | orchestrator | 2025-09-19 11:49:55 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:55.687138 | orchestrator | 2025-09-19 11:49:55 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:55.688124 | orchestrator | 2025-09-19 11:49:55 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:55.688167 | orchestrator | 2025-09-19 11:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:49:58.722962 | orchestrator | 2025-09-19 11:49:58 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:49:58.723173 | orchestrator | 2025-09-19 11:49:58 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:49:58.724055 | orchestrator | 2025-09-19 11:49:58 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:49:58.724820 | orchestrator | 2025-09-19 11:49:58 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:49:58.724844 | orchestrator | 2025-09-19 11:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:01.759567 | orchestrator | 2025-09-19 11:50:01 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:01.760850 | orchestrator | 2025-09-19 11:50:01 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:01.762751 | orchestrator | 2025-09-19 11:50:01 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:01.764323 | orchestrator | 2025-09-19 11:50:01 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:50:01.764572 | orchestrator | 2025-09-19 11:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:04.808334 | orchestrator | 2025-09-19 11:50:04 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:04.808440 | orchestrator | 2025-09-19 11:50:04 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:04.808455 | orchestrator | 2025-09-19 11:50:04 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:04.809140 | orchestrator | 2025-09-19 11:50:04 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:50:04.809181 | orchestrator | 2025-09-19 11:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:07.841441 | orchestrator | 2025-09-19 11:50:07 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:07.841539 | orchestrator | 2025-09-19 11:50:07 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:07.842109 | orchestrator | 2025-09-19 11:50:07 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:07.842764 | orchestrator | 2025-09-19 11:50:07 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:50:07.842801 | orchestrator | 2025-09-19 11:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:10.865499 | orchestrator | 2025-09-19 11:50:10 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:10.866454 | orchestrator | 2025-09-19 11:50:10 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:10.868044 | orchestrator | 2025-09-19 11:50:10 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:10.868995 | orchestrator | 2025-09-19 11:50:10 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:50:10.869035 | orchestrator | 2025-09-19 11:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:13.895451 | orchestrator | 2025-09-19 11:50:13 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:13.895700 | orchestrator | 2025-09-19 11:50:13 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:13.896432 | orchestrator | 2025-09-19 11:50:13 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:13.897079 | orchestrator | 2025-09-19 11:50:13 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED 2025-09-19 11:50:13.897102 | orchestrator | 2025-09-19 11:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:50:16.918406 | orchestrator | 2025-09-19 11:50:16 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED 2025-09-19 11:50:16.918502 | orchestrator | 2025-09-19 11:50:16 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:50:16.918954 | orchestrator | 2025-09-19 11:50:16 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED 2025-09-19 11:50:16.919577 | orchestrator | 2025-09-19 11:50:16 | INFO  | Task 
3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:16.919623 | orchestrator | 2025-09-19 11:50:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:19.943974 | orchestrator | 2025-09-19 11:50:19 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:19.944064 | orchestrator | 2025-09-19 11:50:19 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:19.944377 | orchestrator | 2025-09-19 11:50:19 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:50:19.944776 | orchestrator | 2025-09-19 11:50:19 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:19.944799 | orchestrator | 2025-09-19 11:50:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:22.977545 | orchestrator | 2025-09-19 11:50:22 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:22.978160 | orchestrator | 2025-09-19 11:50:22 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:22.979070 | orchestrator | 2025-09-19 11:50:22 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:50:22.979710 | orchestrator | 2025-09-19 11:50:22 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:22.979750 | orchestrator | 2025-09-19 11:50:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:26.031072 | orchestrator | 2025-09-19 11:50:26 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:26.032875 | orchestrator | 2025-09-19 11:50:26 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:26.036727 | orchestrator | 2025-09-19 11:50:26 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:50:26.039013 | orchestrator | 2025-09-19 11:50:26 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:26.039466 | orchestrator | 2025-09-19 11:50:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:29.101788 | orchestrator | 2025-09-19 11:50:29 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:29.103780 | orchestrator | 2025-09-19 11:50:29 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:29.105754 | orchestrator | 2025-09-19 11:50:29 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:50:29.106869 | orchestrator | 2025-09-19 11:50:29 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:29.106896 | orchestrator | 2025-09-19 11:50:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:32.141278 | orchestrator | 2025-09-19 11:50:32 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:32.142407 | orchestrator | 2025-09-19 11:50:32 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:32.143402 | orchestrator | 2025-09-19 11:50:32 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state STARTED
2025-09-19 11:50:32.144902 | orchestrator | 2025-09-19 11:50:32 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:32.144932 | orchestrator | 2025-09-19 11:50:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:35.189007 | orchestrator | 2025-09-19 11:50:35 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:35.191274 | orchestrator | 2025-09-19 11:50:35 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:35.193266 | orchestrator | 2025-09-19 11:50:35 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:35.197093 | orchestrator | 2025-09-19 11:50:35 | INFO  | Task 41748e24-64dd-46b7-9ac7-5c169eece8bd is in state SUCCESS
2025-09-19 11:50:35.199163 | orchestrator |
2025-09-19 11:50:35.199198 | orchestrator |
2025-09-19 11:50:35.199212 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:50:35.199226 | orchestrator |
2025-09-19 11:50:35.199238 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:50:35.199250 | orchestrator | Friday 19 September 2025 11:47:35 +0000 (0:00:00.275) 0:00:00.275 ******
2025-09-19 11:50:35.199262 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:50:35.199274 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:50:35.199285 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:50:35.199296 | orchestrator |
2025-09-19 11:50:35.199307 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:50:35.199318 | orchestrator | Friday 19 September 2025 11:47:35 +0000 (0:00:00.384) 0:00:00.660 ******
2025-09-19 11:50:35.199330 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-19 11:50:35.199341 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-19 11:50:35.199352 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-19 11:50:35.199363 | orchestrator |
2025-09-19 11:50:35.199374 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-19 11:50:35.199410 | orchestrator |
2025-09-19 11:50:35.199422 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 11:50:35.199433 | orchestrator | Friday 19 September 2025 11:47:35 +0000 (0:00:00.480) 0:00:01.140 ******
2025-09-19 11:50:35.199444 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:50:35.199456 | orchestrator |
2025-09-19 11:50:35.199467 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-19 11:50:35.199477 | orchestrator | Friday 19 September 2025 11:47:37 +0000 (0:00:01.266) 0:00:02.407 ******
2025-09-19 11:50:35.199488 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-19 11:50:35.199498 | orchestrator |
2025-09-19 11:50:35.199509 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-19 11:50:35.199989 | orchestrator | Friday 19 September 2025 11:47:40 +0000 (0:00:03.426) 0:00:05.833 ******
2025-09-19 11:50:35.200008 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-19 11:50:35.200019 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-19 11:50:35.200030 | orchestrator |
2025-09-19 11:50:35.200041 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-19 11:50:35.200052 | orchestrator | Friday 19 September 2025 11:47:47 +0000 (0:00:06.881) 0:00:12.714 ******
2025-09-19 11:50:35.200063 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:50:35.200074 | orchestrator |
2025-09-19 11:50:35.200085 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-19 11:50:35.200096 | orchestrator | Friday 19 September 2025 11:47:50 +0000 (0:00:03.395) 0:00:16.110 ******
2025-09-19 11:50:35.200106 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:50:35.200152 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-19 11:50:35.200166 | orchestrator |
2025-09-19 11:50:35.200177 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-19 11:50:35.200187 | orchestrator | Friday 19 September 2025 11:47:54 +0000 (0:00:04.079) 0:00:20.189 ******
2025-09-19 11:50:35.200198 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:50:35.200209 | orchestrator |
2025-09-19 11:50:35.200219 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-09-19 11:50:35.200481 | orchestrator | Friday 19 September 2025 11:47:58 +0000 (0:00:03.560) 0:00:23.749 ******
2025-09-19 11:50:35.200539 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-09-19 11:50:35.200551 | orchestrator |
2025-09-19 11:50:35.200562 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-09-19 11:50:35.200597 | orchestrator | Friday 19 September 2025 11:48:03 +0000 (0:00:04.771) 0:00:28.521 ******
2025-09-19 11:50:35.200701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 11:50:35.200733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.200774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.200787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 
11:50:35.200936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.200996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201007 | orchestrator | 2025-09-19 11:50:35.201018 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-19 11:50:35.201029 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:04.557) 0:00:33.078 ****** 2025-09-19 11:50:35.201040 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.201051 | orchestrator | 2025-09-19 11:50:35.201062 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-19 11:50:35.201073 | orchestrator | Friday 19 September 2025 11:48:08 +0000 (0:00:00.379) 0:00:33.457 ****** 2025-09-19 11:50:35.201083 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.201095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.201107 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
11:50:35.201119 | orchestrator | 2025-09-19 11:50:35.201131 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:50:35.201143 | orchestrator | Friday 19 September 2025 11:48:09 +0000 (0:00:00.766) 0:00:34.224 ****** 2025-09-19 11:50:35.201155 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:50:35.201167 | orchestrator | 2025-09-19 11:50:35.201178 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 11:50:35.201198 | orchestrator | Friday 19 September 2025 11:48:09 +0000 (0:00:00.922) 0:00:35.147 ****** 2025-09-19 11:50:35.201216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.201238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.201251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.201263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 
11:50:35.201361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.201485 | orchestrator | 2025-09-19 11:50:35.201496 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-19 11:50:35.201507 | orchestrator | Friday 19 September 2025 11:48:17 +0000 (0:00:07.363) 0:00:42.511 ****** 2025-09-19 11:50:35.201519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.201535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.201553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.201653 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:50:35.201676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.201688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  
2025-09-19 11:50:35.201700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.201711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201740 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201798 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.201810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201839 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.201850 | orchestrator | 2025-09-19 11:50:35.201861 | orchestrator | TASK [service-cert-copy : designate | Copying over 
backend internal TLS key] *** 2025-09-19 11:50:35.201872 | orchestrator | Friday 19 September 2025 11:48:18 +0000 (0:00:00.875) 0:00:43.387 ****** 2025-09-19 11:50:35.201883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.201900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.201919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.201942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.201971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.201987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202117 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.202128 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.202140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.202152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.202169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202218 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.202229 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:50:35.202240 | orchestrator | 2025-09-19 11:50:35.202251 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-19 11:50:35.202262 | orchestrator | Friday 19 September 2025 11:48:19 +0000 (0:00:01.340) 0:00:44.728 ****** 2025-09-19 11:50:35.202273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.202285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.202731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.202799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.202991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203020 | orchestrator | 2025-09-19 11:50:35.203032 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-19 11:50:35.203043 | orchestrator | Friday 19 September 2025 11:48:27 +0000 (0:00:08.020) 0:00:52.748 ****** 2025-09-19 11:50:35.203054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.203066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.203082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.203101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203314 | orchestrator | 2025-09-19 11:50:35.203325 | orchestrator | TASK [designate : 
Copying over pools.yaml] ************************************* 2025-09-19 11:50:35.203336 | orchestrator | Friday 19 September 2025 11:48:47 +0000 (0:00:19.757) 0:01:12.505 ****** 2025-09-19 11:50:35.203346 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:50:35.203357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:50:35.203368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:50:35.203379 | orchestrator | 2025-09-19 11:50:35.203389 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 11:50:35.203400 | orchestrator | Friday 19 September 2025 11:48:52 +0000 (0:00:04.693) 0:01:17.198 ****** 2025-09-19 11:50:35.203411 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:50:35.203421 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:50:35.203432 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:50:35.203442 | orchestrator | 2025-09-19 11:50:35.203453 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 11:50:35.203463 | orchestrator | Friday 19 September 2025 11:48:56 +0000 (0:00:04.546) 0:01:21.745 ****** 2025-09-19 11:50:35.203475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203758 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203799 | orchestrator | 2025-09-19 11:50:35.203810 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-19 
11:50:35.203821 | orchestrator | Friday 19 September 2025 11:49:00 +0000 (0:00:03.541) 0:01:25.287 ****** 2025-09-19 11:50:35.203832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.203879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203975 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.203985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.203995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204060 | orchestrator | 2025-09-19 11:50:35.204070 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:50:35.204079 | orchestrator | Friday 19 September 2025 11:49:03 +0000 (0:00:02.921) 0:01:28.208 ****** 2025-09-19 11:50:35.204089 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.204099 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.204109 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:50:35.204118 | orchestrator | 2025-09-19 11:50:35.204128 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-19 11:50:35.204137 | orchestrator | Friday 19 September 2025 11:49:03 +0000 (0:00:00.289) 0:01:28.497 ****** 2025-09-19 11:50:35.204147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.204157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.204173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204225 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.204235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.204251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.204261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204313 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.204323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:50:35.204338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:50:35.204349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:50:35.204400 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:50:35.204410 | orchestrator | 2025-09-19 11:50:35.204420 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 11:50:35.204429 | orchestrator | Friday 19 September 2025 11:49:04 +0000 
(0:00:01.506) 0:01:30.004 ****** 2025-09-19 11:50:35.204439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.204458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.204468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:50:35.204482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:50:35.204743 | orchestrator | 2025-09-19 11:50:35.204753 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:50:35.204763 | orchestrator | Friday 19 September 2025 11:49:10 +0000 (0:00:05.735) 0:01:35.740 ****** 2025-09-19 11:50:35.204773 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:50:35.204782 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:50:35.204792 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:50:35.204802 | orchestrator | 2025-09-19 11:50:35.204811 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-19 11:50:35.204821 | orchestrator | Friday 19 September 2025 11:49:10 +0000 (0:00:00.278) 0:01:36.018 ****** 2025-09-19 11:50:35.204837 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-19 11:50:35.204846 | orchestrator | 2025-09-19 11:50:35.204856 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-19 11:50:35.204866 | orchestrator | Friday 19 September 2025 11:49:13 +0000 (0:00:02.451) 0:01:38.470 ****** 2025-09-19 11:50:35.204875 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:50:35.204885 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-19 11:50:35.204894 | orchestrator | 2025-09-19 11:50:35.204904 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-19 11:50:35.204913 | orchestrator | Friday 19 September 2025 11:49:16 +0000 (0:00:02.790) 0:01:41.261 ****** 
2025-09-19 11:50:35.204923 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.204933 | orchestrator |
2025-09-19 11:50:35.204942 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:50:35.204951 | orchestrator | Friday 19 September 2025 11:49:32 +0000 (0:00:16.299) 0:01:57.560 ******
2025-09-19 11:50:35.204961 | orchestrator |
2025-09-19 11:50:35.204970 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:50:35.204980 | orchestrator | Friday 19 September 2025 11:49:32 +0000 (0:00:00.189) 0:01:57.750 ******
2025-09-19 11:50:35.204989 | orchestrator |
2025-09-19 11:50:35.204999 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:50:35.205008 | orchestrator | Friday 19 September 2025 11:49:32 +0000 (0:00:00.061) 0:01:57.812 ******
2025-09-19 11:50:35.205017 | orchestrator |
2025-09-19 11:50:35.205027 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-19 11:50:35.205036 | orchestrator | Friday 19 September 2025 11:49:32 +0000 (0:00:00.062) 0:01:57.875 ******
2025-09-19 11:50:35.205046 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205055 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205065 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205074 | orchestrator |
2025-09-19 11:50:35.205084 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-19 11:50:35.205094 | orchestrator | Friday 19 September 2025 11:49:39 +0000 (0:00:07.132) 0:02:05.007 ******
2025-09-19 11:50:35.205103 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205113 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205122 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205132 | orchestrator |
2025-09-19 11:50:35.205141 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-19 11:50:35.205151 | orchestrator | Friday 19 September 2025 11:49:46 +0000 (0:00:06.189) 0:02:11.197 ******
2025-09-19 11:50:35.205160 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205170 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205179 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205189 | orchestrator |
2025-09-19 11:50:35.205198 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-19 11:50:35.205208 | orchestrator | Friday 19 September 2025 11:49:52 +0000 (0:00:06.305) 0:02:17.502 ******
2025-09-19 11:50:35.205217 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205227 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205236 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205245 | orchestrator |
2025-09-19 11:50:35.205255 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-19 11:50:35.205264 | orchestrator | Friday 19 September 2025 11:50:04 +0000 (0:00:12.229) 0:02:29.732 ******
2025-09-19 11:50:35.205274 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205283 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205293 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205302 | orchestrator |
2025-09-19 11:50:35.205312 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-19 11:50:35.205321 | orchestrator | Friday 19 September 2025 11:50:12 +0000 (0:00:07.787) 0:02:37.520 ******
2025-09-19 11:50:35.205331 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:50:35.205346 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:50:35.205356 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205365 | orchestrator |
2025-09-19 11:50:35.205374 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-19 11:50:35.205384 | orchestrator | Friday 19 September 2025 11:50:24 +0000 (0:00:12.282) 0:02:49.803 ******
2025-09-19 11:50:35.205393 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:50:35.205403 | orchestrator |
2025-09-19 11:50:35.205417 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:50:35.205427 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:50:35.205437 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:50:35.205447 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:50:35.205457 | orchestrator |
2025-09-19 11:50:35.205466 | orchestrator |
2025-09-19 11:50:35.205480 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:50:35.205490 | orchestrator | Friday 19 September 2025 11:50:32 +0000 (0:00:07.479) 0:02:57.282 ******
2025-09-19 11:50:35.205500 | orchestrator | ===============================================================================
2025-09-19 11:50:35.205510 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.76s
2025-09-19 11:50:35.205519 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.30s
2025-09-19 11:50:35.205529 | orchestrator | designate : Restart designate-worker container ------------------------- 12.28s
2025-09-19 11:50:35.205538 | orchestrator | designate : Restart designate-producer container ----------------------- 12.23s
2025-09-19 11:50:35.205548 | orchestrator | designate : Copying over config.json files for services ----------------- 8.01s
2025-09-19 11:50:35.205557 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.79s
2025-09-19 11:50:35.205567 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.48s
2025-09-19 11:50:35.205597 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.36s
2025-09-19 11:50:35.205607 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.13s
2025-09-19 11:50:35.205617 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.88s
2025-09-19 11:50:35.205626 | orchestrator | designate : Restart designate-central container ------------------------- 6.31s
2025-09-19 11:50:35.205636 | orchestrator | designate : Restart designate-api container ----------------------------- 6.19s
2025-09-19 11:50:35.205646 | orchestrator | designate : Check designate containers ---------------------------------- 5.74s
2025-09-19 11:50:35.205656 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.77s
2025-09-19 11:50:35.205665 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.69s
2025-09-19 11:50:35.205675 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.56s
2025-09-19 11:50:35.205684 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.55s
2025-09-19 11:50:35.205694 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.08s
2025-09-19 11:50:35.205704 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.56s
2025-09-19 11:50:35.205713 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.54s
2025-09-19 11:50:35.205723 | orchestrator | 2025-09-19 11:50:35 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:35.205733 | orchestrator | 2025-09-19 11:50:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:38.245484 | orchestrator | 2025-09-19 11:50:38 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:38.245712 | orchestrator | 2025-09-19 11:50:38 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:38.247556 | orchestrator | 2025-09-19 11:50:38 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:38.248172 | orchestrator | 2025-09-19 11:50:38 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:38.248194 | orchestrator | 2025-09-19 11:50:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:41.273027 | orchestrator | 2025-09-19 11:50:41 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:41.273384 | orchestrator | 2025-09-19 11:50:41 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:41.274520 | orchestrator | 2025-09-19 11:50:41 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:41.275138 | orchestrator | 2025-09-19 11:50:41 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:41.275184 | orchestrator | 2025-09-19 11:50:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:44.299298 | orchestrator | 2025-09-19 11:50:44 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:44.301135 | orchestrator | 2025-09-19 11:50:44 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:44.302674 | orchestrator | 2025-09-19 11:50:44 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:44.304322 | orchestrator | 2025-09-19 11:50:44 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:44.304486 | orchestrator | 2025-09-19 11:50:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:47.344352 | orchestrator | 2025-09-19 11:50:47 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state STARTED
2025-09-19 11:50:47.346871 | orchestrator | 2025-09-19 11:50:47 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:47.349652 | orchestrator | 2025-09-19 11:50:47 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:47.351914 | orchestrator | 2025-09-19 11:50:47 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:47.352031 | orchestrator | 2025-09-19 11:50:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:50.403840 | orchestrator | 2025-09-19 11:50:50 | INFO  | Task d06fb434-b0ca-4f4f-9e5c-7695bf496b95 is in state SUCCESS
2025-09-19 11:50:50.408271 | orchestrator | 2025-09-19 11:50:50 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:50.408318 | orchestrator | 2025-09-19 11:50:50 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:50:50.409023 | orchestrator | 2025-09-19 11:50:50 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:50.412980 | orchestrator | 2025-09-19 11:50:50 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:50.413030 | orchestrator | 2025-09-19 11:50:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:53.440257 | orchestrator | 2025-09-19 11:50:53 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:53.442184 | orchestrator | 2025-09-19 11:50:53 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:50:53.445123 | orchestrator | 2025-09-19 11:50:53 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:53.447436 | orchestrator | 2025-09-19 11:50:53 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:53.447698 | orchestrator | 2025-09-19 11:50:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:56.488375 | orchestrator | 2025-09-19 11:50:56 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:56.489807 | orchestrator | 2025-09-19 11:50:56 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:50:56.492233 | orchestrator | 2025-09-19 11:50:56 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:56.495809 | orchestrator | 2025-09-19 11:50:56 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:56.495858 | orchestrator | 2025-09-19 11:50:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:50:59.546917 | orchestrator | 2025-09-19 11:50:59 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:50:59.547741 | orchestrator | 2025-09-19 11:50:59 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:50:59.550968 | orchestrator | 2025-09-19 11:50:59 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:50:59.554655 | orchestrator | 2025-09-19 11:50:59 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state STARTED
2025-09-19 11:50:59.555221 | orchestrator | 2025-09-19 11:50:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:02.601020 | orchestrator | 2025-09-19 11:51:02 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:02.602646 | orchestrator | 2025-09-19 11:51:02 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:02.604444 | orchestrator | 2025-09-19 11:51:02 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:02.608501 | orchestrator | 2025-09-19 11:51:02 | INFO  | Task 3e0e1223-0ea9-4e64-acbb-d57f1549cab3 is in state SUCCESS
2025-09-19 11:51:02.610902 | orchestrator |
2025-09-19 11:51:02.610939 | orchestrator |
2025-09-19 11:51:02.610952 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-19 11:51:02.610964 | orchestrator |
2025-09-19 11:51:02.610976 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-19 11:51:02.610989 | orchestrator | Friday 19 September 2025 11:49:29 +0000 (0:00:00.108) 0:00:00.108 ******
2025-09-19 11:51:02.611001 | orchestrator | changed: [localhost]
2025-09-19 11:51:02.611014 | orchestrator |
2025-09-19 11:51:02.611026 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-19 11:51:02.611055 | orchestrator | Friday 19 September 2025 11:49:29 +0000 (0:00:00.811) 0:00:00.919 ******
2025-09-19 11:51:02.611067 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-09-19 11:51:02.611078 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2025-09-19 11:51:02.611090 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2025-09-19 11:51:02.611103 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs.sha256"}
2025-09-19 11:51:02.611117 | orchestrator |
2025-09-19 11:51:02.611129 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:51:02.611141 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-09-19 11:51:02.611525 | orchestrator |
2025-09-19 11:51:02.611567 | orchestrator |
2025-09-19 11:51:02.611579 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:51:02.611590 | orchestrator | Friday 19 September 2025 11:50:48 +0000 (0:01:18.238) 0:01:19.158 ******
2025-09-19 11:51:02.611601 | orchestrator | ===============================================================================
2025-09-19 11:51:02.611612 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 78.24s
2025-09-19 11:51:02.611623 | orchestrator | Ensure the destination directory exists --------------------------------- 0.81s
2025-09-19 11:51:02.611633 | orchestrator |
2025-09-19 11:51:02.611644 | orchestrator |
2025-09-19 11:51:02.611655 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:51:02.611665 | orchestrator |
2025-09-19 11:51:02.611676 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:51:02.611687 | orchestrator | Friday 19 September 2025 11:46:55 +0000 (0:00:00.271) 0:00:00.271 ******
2025-09-19 11:51:02.611698 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:02.611709 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:02.611720 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:02.611731 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:51:02.611742 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:51:02.611753 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:51:02.611764 | orchestrator |
2025-09-19 11:51:02.611775 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:51:02.611786 | orchestrator | Friday 19 September 2025 11:46:56 +0000 (0:00:00.670) 0:00:00.941 ******
2025-09-19 11:51:02.611797 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-19 11:51:02.611808 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-19 11:51:02.611819 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-19 11:51:02.611830 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-19 11:51:02.612044 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-19 11:51:02.612057 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-19 11:51:02.612068 | orchestrator |
2025-09-19 11:51:02.612079 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-19 11:51:02.612090 | orchestrator |
2025-09-19 11:51:02.612101 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:51:02.612113 | orchestrator | Friday 19 September 2025 11:46:57 +0000 (0:00:00.583) 0:00:01.525 ******
2025-09-19 11:51:02.612124 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:51:02.612135 | orchestrator |
2025-09-19 11:51:02.612146 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-19 11:51:02.612157 | orchestrator | Friday 19 September 2025 11:46:58 +0000 (0:00:01.178) 0:00:02.703 ******
2025-09-19 11:51:02.612168 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:02.612179 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:02.612189 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:02.612200 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:51:02.612211 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:51:02.612222 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:51:02.612233 | orchestrator |
2025-09-19 11:51:02.612244 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-19 11:51:02.612255 | orchestrator | Friday 19 September 2025 11:46:59 +0000 (0:00:01.333) 0:00:04.037 ******
2025-09-19 11:51:02.612265 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:02.612276 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:02.612287 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:02.612298 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:51:02.612309 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:51:02.612320 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:51:02.612330 | orchestrator |
2025-09-19 11:51:02.612353 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-19 11:51:02.612364 | orchestrator | Friday 19 September 2025 11:47:00 +0000 (0:00:01.099) 0:00:05.136 ******
2025-09-19 11:51:02.612376 | orchestrator | ok: [testbed-node-0] => {
2025-09-19 11:51:02.612387 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612398 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612409 | orchestrator | }
2025-09-19 11:51:02.612458 | orchestrator | ok: [testbed-node-1] => {
2025-09-19 11:51:02.612471 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612481 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612492 | orchestrator | }
2025-09-19 11:51:02.612503 | orchestrator | ok: [testbed-node-2] => {
2025-09-19 11:51:02.612514 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612525 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612536 | orchestrator | }
2025-09-19 11:51:02.612567 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 11:51:02.612578 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612589 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612600 | orchestrator | }
2025-09-19 11:51:02.612619 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:51:02.612630 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612641 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612652 | orchestrator | }
2025-09-19 11:51:02.612663 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:51:02.612674 | orchestrator |  "changed": false,
2025-09-19 11:51:02.612687 | orchestrator |  "msg": "All assertions passed"
2025-09-19 11:51:02.612699 | orchestrator | }
2025-09-19 11:51:02.612711 | orchestrator |
2025-09-19 11:51:02.612724 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-19 11:51:02.612737 | orchestrator | Friday 19 September 2025 11:47:01 +0000 (0:00:00.696) 0:00:05.833 ******
2025-09-19 11:51:02.612749 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.612761 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.612773 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.612785 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.612798 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.612810 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.612822 | orchestrator |
2025-09-19 11:51:02.612834 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-19 11:51:02.612847 | orchestrator | Friday 19 September 2025 11:47:01 +0000 (0:00:00.545) 0:00:06.378 ******
2025-09-19 11:51:02.612859 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-19 11:51:02.612871 | orchestrator |
2025-09-19 11:51:02.612883 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-19 11:51:02.612895 | orchestrator | Friday 19 September 2025 11:47:05 +0000 (0:00:03.632) 0:00:10.011 ******
2025-09-19 11:51:02.612907 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-19 11:51:02.612921 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-19 11:51:02.612933 | orchestrator |
2025-09-19 11:51:02.612945 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-19 11:51:02.612957 | orchestrator | Friday 19 September 2025 11:47:12 +0000 (0:00:06.716) 0:00:16.728 ******
2025-09-19 11:51:02.612969 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:51:02.612981 | orchestrator |
2025-09-19 11:51:02.612994 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-19 11:51:02.613006 | orchestrator | Friday 19 September 2025 11:47:15 +0000 (0:00:03.212) 0:00:19.940 ******
2025-09-19 11:51:02.613018 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:51:02.613030 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-19 11:51:02.613042 | orchestrator |
2025-09-19 11:51:02.613054 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-19 11:51:02.613074 | orchestrator | Friday 19 September 2025 11:47:19 +0000 (0:00:03.652) 0:00:23.593 ******
2025-09-19 11:51:02.613085 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:51:02.613096 | orchestrator |
2025-09-19 11:51:02.613106 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-19 11:51:02.613117 | orchestrator | Friday 19 September 2025 11:47:22 +0000 (0:00:03.066) 0:00:26.660 ******
2025-09-19 11:51:02.613128 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-19 11:51:02.613139 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-19 11:51:02.613150 | orchestrator |
2025-09-19 11:51:02.613161 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:51:02.613172 | orchestrator | Friday 19 September 2025 11:47:30 +0000 (0:00:08.085) 0:00:34.746 ******
2025-09-19 11:51:02.613183 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.613194 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.613205 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.613216 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.613227 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.613238 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.613248 | orchestrator |
2025-09-19 11:51:02.613259 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-19 11:51:02.613270 | orchestrator | Friday 19 September 2025 11:47:31 +0000 (0:00:00.799) 0:00:35.545 ******
2025-09-19 11:51:02.613281 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.613292 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.613303 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.613313 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.613324 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.613335 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.613346 | orchestrator |
2025-09-19 11:51:02.613357 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-19 11:51:02.613368 | orchestrator | Friday 19 September 2025 11:47:33 +0000 (0:00:02.326) 0:00:37.872 ******
2025-09-19 11:51:02.613379 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:02.613390 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:02.613401 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:51:02.613412 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:51:02.613422 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:02.613433 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:51:02.613444 | orchestrator |
2025-09-19 11:51:02.613455 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 11:51:02.613466 | orchestrator | Friday 19 September 2025 11:47:35 +0000 (0:00:02.147) 0:00:40.020 ******
2025-09-19 11:51:02.613477 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.613488 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.613499 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.613571 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.613586 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.613597 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.613608 | orchestrator |
2025-09-19 11:51:02.613619 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-19 11:51:02.613630 | orchestrator | Friday 19 September 2025 11:47:38 +0000 (0:00:02.616) 0:00:42.636 ******
2025-09-19 11:51:02.613650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.613673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.613686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.613698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.613744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.613763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.613781 | orchestrator |
2025-09-19 11:51:02.613793 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-09-19 11:51:02.613804 | orchestrator | Friday 19 September 2025 11:47:41 +0000 (0:00:02.913) 0:00:45.549 ******
2025-09-19 11:51:02.613815 | orchestrator | [WARNING]: Skipped
2025-09-19 11:51:02.613827 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-09-19 11:51:02.613838 | orchestrator | due to this access issue:
2025-09-19 11:51:02.613849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-09-19 11:51:02.613860 | orchestrator | a directory
2025-09-19 11:51:02.613871 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:51:02.613882 | orchestrator |
2025-09-19 11:51:02.613893 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:51:02.613904 | orchestrator | Friday 19 September 2025 11:47:42 +0000 (0:00:00.914) 0:00:46.463 ******
2025-09-19 11:51:02.613915 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:51:02.613927 | orchestrator |
2025-09-19 11:51:02.613938 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-09-19 11:51:02.613949 | orchestrator | Friday 19 September 2025 11:47:43
+0000 (0:00:01.300) 0:00:47.764 ****** 2025-09-19 11:51:02.613960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.613973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.614071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.614098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.614110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.614121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.614133 | orchestrator | 2025-09-19 11:51:02.614144 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-19 11:51:02.614155 | orchestrator | Friday 19 September 2025 11:47:46 +0000 (0:00:03.017) 0:00:50.782 ****** 2025-09-19 11:51:02.614166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614178 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.614229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614250 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.614262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614274 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.614285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614296 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.614307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614319 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.614330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614347 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.614359 | orchestrator | 2025-09-19 11:51:02.614400 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-19 11:51:02.614414 | orchestrator | Friday 19 September 2025 11:47:49 +0000 (0:00:02.672) 0:00:53.454 ****** 2025-09-19 11:51:02.614436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614448 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.614460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614471 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.614483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614494 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.614505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.614523 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.614567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614594 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 11:51:02.614612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.614630 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.614643 | orchestrator | 2025-09-19 11:51:02.614661 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-19 11:51:02.614679 | orchestrator | Friday 19 September 2025 11:47:51 +0000 (0:00:02.793) 0:00:56.248 ****** 2025-09-19 11:51:02.614697 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.614715 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.614731 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.614749 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.614766 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.614784 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.614800 | orchestrator | 2025-09-19 11:51:02.614815 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-19 11:51:02.614831 | orchestrator | Friday 19 September 2025 11:47:53 +0000 (0:00:02.090) 0:00:58.338 ****** 2025-09-19 11:51:02.614846 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.614861 | orchestrator | 2025-09-19 
11:51:02.614877 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-19 11:51:02.614893 | orchestrator | Friday 19 September 2025 11:47:54 +0000 (0:00:00.133) 0:00:58.472 ****** 2025-09-19 11:51:02.614909 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.614924 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.614940 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.614957 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.614972 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.614987 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.615003 | orchestrator | 2025-09-19 11:51:02.615018 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-19 11:51:02.615034 | orchestrator | Friday 19 September 2025 11:47:54 +0000 (0:00:00.803) 0:00:59.275 ****** 2025-09-19 11:51:02.615050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.615081 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.615099 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.615116 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.615159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.615178 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.615195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.615212 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.615230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.615259 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.615276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.615294 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.615311 | orchestrator | 2025-09-19 11:51:02.615329 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-19 11:51:02.615347 | orchestrator | Friday 19 September 2025 11:47:56 +0000 (0:00:02.116) 0:01:01.392 ****** 2025-09-19 11:51:02.615378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.615407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.615420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.615440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615483 | orchestrator |
2025-09-19 11:51:02.615494 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-19 11:51:02.615510 | orchestrator | Friday 19 September 2025 11:48:00 +0000 (0:00:03.662) 0:01:05.054 ******
2025-09-19 11:51:02.615522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615640 | orchestrator |
2025-09-19 11:51:02.615651 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-19 11:51:02.615663 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:06.743) 0:01:11.798 ******
2025-09-19 11:51:02.615674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615692 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.615704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615715 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.615726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615738 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.615761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615773 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.615785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.615802 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.615814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615825 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.615836 | orchestrator |
2025-09-19 11:51:02.615847 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-19 11:51:02.615857 | orchestrator | Friday 19 September 2025 11:48:10 +0000 (0:00:02.986) 0:01:14.784 ******
2025-09-19 11:51:02.615868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.615879 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:02.615890 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.615901 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.615911 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:02.615922 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:02.615933 | orchestrator |
2025-09-19 11:51:02.615944 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-19 11:51:02.615954 | orchestrator | Friday 19 September 2025 11:48:14 +0000 (0:00:03.683) 0:01:18.468 ******
2025-09-19 11:51:02.615966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.615977 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.616012 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.616045 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616092 | orchestrator |
2025-09-19 11:51:02.616103 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-19 11:51:02.616114 | orchestrator | Friday 19 September 2025 11:48:18 +0000 (0:00:04.835) 0:01:23.303 ******
2025-09-19 11:51:02.616130 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616141 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616152 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616163 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616174 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616184 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616195 | orchestrator |
2025-09-19 11:51:02.616206 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-19 11:51:02.616217 | orchestrator | Friday 19 September 2025 11:48:22 +0000 (0:00:03.468) 0:01:26.771 ******
2025-09-19 11:51:02.616240 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616252 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616263 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616273 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616284 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616295 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616306 | orchestrator |
2025-09-19 11:51:02.616317 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-19 11:51:02.616328 | orchestrator | Friday 19 September 2025 11:48:25 +0000 (0:00:02.805) 0:01:29.577 ******
2025-09-19 11:51:02.616339 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616349 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616360 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616371 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616381 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616392 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616403 | orchestrator |
2025-09-19 11:51:02.616414 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-19 11:51:02.616425 | orchestrator | Friday 19 September 2025 11:48:28 +0000 (0:00:03.008) 0:01:32.588 ******
2025-09-19 11:51:02.616436 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616446 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616457 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616468 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616479 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616489 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616500 | orchestrator |
2025-09-19 11:51:02.616511 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-19 11:51:02.616522 | orchestrator | Friday 19 September 2025 11:48:32 +0000 (0:00:04.066) 0:01:36.654 ******
2025-09-19 11:51:02.616533 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616595 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616607 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616618 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616629 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616639 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616650 | orchestrator |
2025-09-19 11:51:02.616661 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-19 11:51:02.616672 | orchestrator | Friday 19 September 2025 11:48:35 +0000 (0:00:03.149) 0:01:39.804 ******
2025-09-19 11:51:02.616683 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616693 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616704 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616715 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616726 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616736 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616747 | orchestrator |
2025-09-19 11:51:02.616758 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-19 11:51:02.616768 | orchestrator | Friday 19 September 2025 11:48:38 +0000 (0:00:02.949) 0:01:42.753 ******
2025-09-19 11:51:02.616777 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616787 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616797 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616807 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616816 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616826 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.616836 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616845 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.616862 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616872 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.616881 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:51:02.616890 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.616900 | orchestrator |
2025-09-19 11:51:02.616910 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-19 11:51:02.616919 | orchestrator | Friday 19 September 2025 11:48:40 +0000 (0:00:02.162) 0:01:44.916 ******
2025-09-19 11:51:02.616936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616946 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.616961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616972 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.616982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.616992 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617030 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617040 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617066 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617076 | orchestrator |
2025-09-19 11:51:02.617086 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-19 11:51:02.617100 | orchestrator | Friday 19 September 2025 11:48:43 +0000 (0:00:02.665) 0:01:47.581 ******
2025-09-19 11:51:02.617110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.617121 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.617149 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:51:02.617169 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617190 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617221 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:51:02.617242 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617251 | orchestrator |
2025-09-19 11:51:02.617261 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-19 11:51:02.617271 | orchestrator | Friday 19 September 2025 11:48:45 +0000 (0:00:02.532) 0:01:50.113 ******
2025-09-19 11:51:02.617280 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617290 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617306 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617316 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617326 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617335 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617345 | orchestrator |
2025-09-19 11:51:02.617355 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-19 11:51:02.617365 | orchestrator | Friday 19 September 2025 11:48:47 +0000 (0:00:02.107) 0:01:52.221 ******
2025-09-19 11:51:02.617375 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617384 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617394 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617403 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:51:02.617413 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:51:02.617423 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:51:02.617432 | orchestrator |
2025-09-19 11:51:02.617442 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-19 11:51:02.617452 | orchestrator | Friday 19 September 2025 11:48:51 +0000 (0:00:04.083) 0:01:56.305 ******
2025-09-19 11:51:02.617461 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617471 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617480 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617490 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617500 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617509 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617518 | orchestrator |
2025-09-19 11:51:02.617528 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-19 11:51:02.617577 | orchestrator | Friday 19 September 2025 11:48:55 +0000 (0:00:03.629) 0:01:59.934 ******
2025-09-19 11:51:02.617589 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617608 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617618 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617627 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617637 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617646 | orchestrator |
2025-09-19 11:51:02.617656 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-19 11:51:02.617666 | orchestrator | Friday 19 September 2025 11:48:58 +0000 (0:00:02.634) 0:02:02.568 ******
2025-09-19 11:51:02.617675 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617685 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617694 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617703 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617713 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617722 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617732 | orchestrator |
2025-09-19 11:51:02.617741 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-19 11:51:02.617751 | orchestrator | Friday 19 September 2025 11:49:01 +0000 (0:00:03.108) 0:02:05.677 ******
2025-09-19 11:51:02.617760 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617770 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:51:02.617779 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:51:02.617789 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617798 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617808 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:02.617817 | orchestrator |
2025-09-19 11:51:02.617827 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-19 11:51:02.617837 | orchestrator | Friday 19 September 2025 11:49:03 +0000 (0:00:02.126) 0:02:07.804 ******
2025-09-19 11:51:02.617846 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:02.617856 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:51:02.617871 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:02.617881 | orchestrator |
skipping: [testbed-node-2] 2025-09-19 11:51:02.617889 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.617902 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.617910 | orchestrator | 2025-09-19 11:51:02.617918 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-19 11:51:02.617926 | orchestrator | Friday 19 September 2025 11:49:06 +0000 (0:00:03.035) 0:02:10.839 ****** 2025-09-19 11:51:02.617934 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.617941 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.617949 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.617961 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.617969 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.617977 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.617985 | orchestrator | 2025-09-19 11:51:02.617993 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-19 11:51:02.618001 | orchestrator | Friday 19 September 2025 11:49:08 +0000 (0:00:02.515) 0:02:13.355 ****** 2025-09-19 11:51:02.618008 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.618041 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.618051 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.618059 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.618067 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.618075 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.618083 | orchestrator | 2025-09-19 11:51:02.618091 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-19 11:51:02.618099 | orchestrator | Friday 19 September 2025 11:49:10 +0000 (0:00:02.036) 0:02:15.391 ****** 2025-09-19 11:51:02.618107 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618115 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.618123 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618131 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.618139 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618147 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.618154 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618162 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.618170 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618178 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.618186 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:51:02.618194 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.618202 | orchestrator | 2025-09-19 11:51:02.618210 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-19 11:51:02.618218 | orchestrator | Friday 19 September 2025 11:49:14 +0000 (0:00:03.358) 0:02:18.749 ****** 2025-09-19 11:51:02.618226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.618234 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.618248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.618256 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.618274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.618283 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.618291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:51:02.618299 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.618307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-09-19 11:51:02.618316 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.618324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:51:02.618337 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.618345 | orchestrator | 2025-09-19 11:51:02.618353 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-19 11:51:02.618361 | orchestrator | Friday 19 September 2025 11:49:16 +0000 (0:00:01.980) 0:02:20.730 ****** 2025-09-19 11:51:02.618373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.618388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.618397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:51:02.618406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.618419 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.618432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:51:02.618440 | orchestrator | 2025-09-19 11:51:02.618448 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 11:51:02.618456 | orchestrator | Friday 19 September 2025 11:49:19 +0000 (0:00:03.251) 0:02:23.982 ****** 2025-09-19 11:51:02.618464 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:02.618472 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:02.618480 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:02.618488 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:51:02.618496 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:51:02.618507 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:51:02.618515 | orchestrator | 2025-09-19 11:51:02.618523 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-19 11:51:02.618531 | orchestrator | Friday 19 September 2025 11:49:20 +0000 (0:00:00.508) 0:02:24.490 ****** 2025-09-19 11:51:02.618555 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:02.618564 | orchestrator | 2025-09-19 11:51:02.618572 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-19 11:51:02.618580 | orchestrator | Friday 19 September 2025 11:49:22 +0000 (0:00:02.212) 0:02:26.702 ****** 2025-09-19 11:51:02.618588 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:02.618596 | orchestrator | 2025-09-19 11:51:02.618604 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2025-09-19 11:51:02.618611 | orchestrator | Friday 19 September 2025 11:49:24 +0000 (0:00:02.393) 0:02:29.095 ****** 2025-09-19 11:51:02.618619 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:02.618627 | orchestrator | 2025-09-19 11:51:02.618635 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618643 | orchestrator | Friday 19 September 2025 11:50:06 +0000 (0:00:41.984) 0:03:11.080 ****** 2025-09-19 11:51:02.618651 | orchestrator | 2025-09-19 11:51:02.618659 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618667 | orchestrator | Friday 19 September 2025 11:50:06 +0000 (0:00:00.199) 0:03:11.280 ****** 2025-09-19 11:51:02.618674 | orchestrator | 2025-09-19 11:51:02.618682 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618690 | orchestrator | Friday 19 September 2025 11:50:07 +0000 (0:00:00.752) 0:03:12.032 ****** 2025-09-19 11:51:02.618698 | orchestrator | 2025-09-19 11:51:02.618706 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618720 | orchestrator | Friday 19 September 2025 11:50:07 +0000 (0:00:00.069) 0:03:12.102 ****** 2025-09-19 11:51:02.618728 | orchestrator | 2025-09-19 11:51:02.618736 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618743 | orchestrator | Friday 19 September 2025 11:50:07 +0000 (0:00:00.087) 0:03:12.189 ****** 2025-09-19 11:51:02.618751 | orchestrator | 2025-09-19 11:51:02.618759 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 11:51:02.618767 | orchestrator | Friday 19 September 2025 11:50:07 +0000 (0:00:00.073) 0:03:12.263 ****** 2025-09-19 11:51:02.618774 | 
orchestrator | 2025-09-19 11:51:02.618782 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-19 11:51:02.618790 | orchestrator | Friday 19 September 2025 11:50:07 +0000 (0:00:00.062) 0:03:12.325 ****** 2025-09-19 11:51:02.618798 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:02.618806 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:02.618813 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:02.618821 | orchestrator | 2025-09-19 11:51:02.618829 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-19 11:51:02.618837 | orchestrator | Friday 19 September 2025 11:50:35 +0000 (0:00:27.474) 0:03:39.799 ****** 2025-09-19 11:51:02.618844 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:51:02.618852 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:51:02.618860 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:51:02.618868 | orchestrator | 2025-09-19 11:51:02.618876 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:51:02.618884 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 11:51:02.618893 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 11:51:02.618901 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 11:51:02.618909 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 11:51:02.618917 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 11:51:02.618925 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 11:51:02.618933 | orchestrator | 2025-09-19 
11:51:02.618940 | orchestrator | 2025-09-19 11:51:02.618948 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:51:02.618956 | orchestrator | Friday 19 September 2025 11:50:59 +0000 (0:00:24.299) 0:04:04.099 ****** 2025-09-19 11:51:02.618964 | orchestrator | =============================================================================== 2025-09-19 11:51:02.618972 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.98s 2025-09-19 11:51:02.618980 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.47s 2025-09-19 11:51:02.618988 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 24.30s 2025-09-19 11:51:02.619000 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.09s 2025-09-19 11:51:02.619008 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.74s 2025-09-19 11:51:02.619016 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.72s 2025-09-19 11:51:02.619024 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.84s 2025-09-19 11:51:02.619032 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.08s 2025-09-19 11:51:02.619049 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 4.07s 2025-09-19 11:51:02.619057 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.68s 2025-09-19 11:51:02.619065 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.66s 2025-09-19 11:51:02.619073 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.65s 2025-09-19 11:51:02.619081 | orchestrator | service-ks-register : neutron | Creating services 
----------------------- 3.63s 2025-09-19 11:51:02.619089 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.63s 2025-09-19 11:51:02.619096 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.47s 2025-09-19 11:51:02.619104 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.36s 2025-09-19 11:51:02.619112 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.25s 2025-09-19 11:51:02.619120 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.21s 2025-09-19 11:51:02.619128 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 3.15s 2025-09-19 11:51:02.619136 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.11s 2025-09-19 11:51:02.619143 | orchestrator | 2025-09-19 11:51:02 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:02.619152 | orchestrator | 2025-09-19 11:51:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:05.657839 | orchestrator | 2025-09-19 11:51:05 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED 2025-09-19 11:51:05.659947 | orchestrator | 2025-09-19 11:51:05 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:05.661766 | orchestrator | 2025-09-19 11:51:05 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:05.667913 | orchestrator | 2025-09-19 11:51:05 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:05.668183 | orchestrator | 2025-09-19 11:51:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:08.695795 | orchestrator | 2025-09-19 11:51:08 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED 2025-09-19 11:51:08.696829 | orchestrator | 2025-09-19 
11:51:08 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:08.697490 | orchestrator | 2025-09-19 11:51:08 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:08.699441 | orchestrator | 2025-09-19 11:51:08 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:08.699598 | orchestrator | 2025-09-19 11:51:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:11.729923 | orchestrator | 2025-09-19 11:51:11 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED 2025-09-19 11:51:11.732850 | orchestrator | 2025-09-19 11:51:11 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:11.733807 | orchestrator | 2025-09-19 11:51:11 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:11.735472 | orchestrator | 2025-09-19 11:51:11 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:11.735579 | orchestrator | 2025-09-19 11:51:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:14.770934 | orchestrator | 2025-09-19 11:51:14 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED 2025-09-19 11:51:14.771054 | orchestrator | 2025-09-19 11:51:14 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:14.773560 | orchestrator | 2025-09-19 11:51:14 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:14.774363 | orchestrator | 2025-09-19 11:51:14 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:14.774392 | orchestrator | 2025-09-19 11:51:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:17.809738 | orchestrator | 2025-09-19 11:51:17 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED 2025-09-19 11:51:17.811269 | orchestrator | 2025-09-19 11:51:17 | INFO  | Task 
b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:17.812467 | orchestrator | 2025-09-19 11:51:17 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:17.813424 | orchestrator | 2025-09-19 11:51:17 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:17.813452 | orchestrator | 2025-09-19 11:51:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:20.853692 | orchestrator | 2025-09-19 11:51:20 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:20.855358 | orchestrator | 2025-09-19 11:51:20 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:20.857585 | orchestrator | 2025-09-19 11:51:20 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:20.860220 | orchestrator | 2025-09-19 11:51:20 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:20.860492 | orchestrator | 2025-09-19 11:51:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:23.898996 | orchestrator | 2025-09-19 11:51:23 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:23.900350 | orchestrator | 2025-09-19 11:51:23 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:23.901936 | orchestrator | 2025-09-19 11:51:23 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:23.903562 | orchestrator | 2025-09-19 11:51:23 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:23.903614 | orchestrator | 2025-09-19 11:51:23 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:26.949106 | orchestrator | 2025-09-19 11:51:26 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:26.950147 | orchestrator | 2025-09-19 11:51:26 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:26.951732 | orchestrator | 2025-09-19 11:51:26 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:26.954820 | orchestrator | 2025-09-19 11:51:26 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:26.955157 | orchestrator | 2025-09-19 11:51:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:29.995489 | orchestrator | 2025-09-19 11:51:29 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:29.997621 | orchestrator | 2025-09-19 11:51:30 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:29.999636 | orchestrator | 2025-09-19 11:51:30 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:30.001622 | orchestrator | 2025-09-19 11:51:30 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:30.001671 | orchestrator | 2025-09-19 11:51:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:33.045442 | orchestrator | 2025-09-19 11:51:33 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:33.046262 | orchestrator | 2025-09-19 11:51:33 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:33.046983 | orchestrator | 2025-09-19 11:51:33 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:33.048924 | orchestrator | 2025-09-19 11:51:33 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:33.048957 | orchestrator | 2025-09-19 11:51:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:36.077852 | orchestrator | 2025-09-19 11:51:36 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:36.080129 | orchestrator | 2025-09-19 11:51:36 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:36.082835 | orchestrator | 2025-09-19 11:51:36 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:36.085324 | orchestrator | 2025-09-19 11:51:36 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:36.085368 | orchestrator | 2025-09-19 11:51:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:39.119867 | orchestrator | 2025-09-19 11:51:39 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:39.121842 | orchestrator | 2025-09-19 11:51:39 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:39.124946 | orchestrator | 2025-09-19 11:51:39 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:39.126692 | orchestrator | 2025-09-19 11:51:39 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:39.126749 | orchestrator | 2025-09-19 11:51:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:42.155164 | orchestrator | 2025-09-19 11:51:42 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:42.155546 | orchestrator | 2025-09-19 11:51:42 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:42.156549 | orchestrator | 2025-09-19 11:51:42 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:42.158140 | orchestrator | 2025-09-19 11:51:42 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:42.158176 | orchestrator | 2025-09-19 11:51:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:45.216270 | orchestrator | 2025-09-19 11:51:45 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:45.218128 | orchestrator | 2025-09-19 11:51:45 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:45.218819 | orchestrator | 2025-09-19 11:51:45 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:45.219758 | orchestrator | 2025-09-19 11:51:45 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:45.219789 | orchestrator | 2025-09-19 11:51:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:48.258333 | orchestrator | 2025-09-19 11:51:48 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state STARTED
2025-09-19 11:51:48.259627 | orchestrator | 2025-09-19 11:51:48 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED
2025-09-19 11:51:48.261356 | orchestrator | 2025-09-19 11:51:48 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:51:48.262957 | orchestrator | 2025-09-19 11:51:48 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:51:48.262987 | orchestrator | 2025-09-19 11:51:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:51.306700 | orchestrator | 2025-09-19 11:51:51 | INFO  | Task cd8593c7-5414-4bbe-9486-e44c3bc200b1 is in state SUCCESS
2025-09-19 11:51:51.308142 | orchestrator |
2025-09-19 11:51:51.308183 | orchestrator |
2025-09-19 11:51:51.308195 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:51:51.308207 | orchestrator |
2025-09-19 11:51:51.308219 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:51:51.308231 | orchestrator | Friday 19 September 2025 11:50:38 +0000 (0:00:00.627) 0:00:00.627 ******
2025-09-19 11:51:51.308242 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:51.308255 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:51.308266 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:51.308277 | orchestrator |
2025-09-19 11:51:51.308288 |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:51:51.308299 | orchestrator | Friday 19 September 2025 11:50:38 +0000 (0:00:00.380) 0:00:01.007 ******
2025-09-19 11:51:51.308310 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-19 11:51:51.308321 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-19 11:51:51.308332 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-19 11:51:51.308343 | orchestrator |
2025-09-19 11:51:51.308354 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-19 11:51:51.308364 | orchestrator |
2025-09-19 11:51:51.308375 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 11:51:51.308386 | orchestrator | Friday 19 September 2025 11:50:39 +0000 (0:00:00.680) 0:00:01.688 ******
2025-09-19 11:51:51.308396 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:51:51.308408 | orchestrator |
2025-09-19 11:51:51.308419 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-19 11:51:51.308429 | orchestrator | Friday 19 September 2025 11:50:40 +0000 (0:00:00.893) 0:00:02.581 ******
2025-09-19 11:51:51.308440 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-19 11:51:51.308451 | orchestrator |
2025-09-19 11:51:51.308461 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-19 11:51:51.308472 | orchestrator | Friday 19 September 2025 11:50:43 +0000 (0:00:03.707) 0:00:06.289 ******
2025-09-19 11:51:51.308509 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-19 11:51:51.308521 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-19 11:51:51.308531 | orchestrator |
2025-09-19 11:51:51.308542 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-19 11:51:51.308553 | orchestrator | Friday 19 September 2025 11:50:51 +0000 (0:00:07.221) 0:00:13.510 ******
2025-09-19 11:51:51.308564 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:51:51.308574 | orchestrator |
2025-09-19 11:51:51.308585 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-19 11:51:51.308596 | orchestrator | Friday 19 September 2025 11:50:54 +0000 (0:00:03.705) 0:00:17.216 ******
2025-09-19 11:51:51.308607 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:51:51.308618 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-19 11:51:51.308629 | orchestrator |
2025-09-19 11:51:51.308640 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-19 11:51:51.308650 | orchestrator | Friday 19 September 2025 11:50:58 +0000 (0:00:03.530) 0:00:20.747 ******
2025-09-19 11:51:51.308679 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:51:51.308690 | orchestrator |
2025-09-19 11:51:51.308701 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-19 11:51:51.308711 | orchestrator | Friday 19 September 2025 11:51:01 +0000 (0:00:03.467) 0:00:24.214 ******
2025-09-19 11:51:51.308746 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-19 11:51:51.308758 | orchestrator |
2025-09-19 11:51:51.308770 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 11:51:51.308781 | orchestrator | Friday 19 September 2025 11:51:06 +0000 (0:00:04.798) 0:00:29.013 ******
2025-09-19 11:51:51.308793 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.308806 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:51.308817 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:51.308829 | orchestrator |
2025-09-19 11:51:51.308841 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-19 11:51:51.308853 | orchestrator | Friday 19 September 2025 11:51:06 +0000 (0:00:00.396) 0:00:29.409 ******
2025-09-19 11:51:51.308869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.308899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.308913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.308926 | orchestrator |
2025-09-19 11:51:51.308938 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-19 11:51:51.308950 | orchestrator | Friday 19 September 2025 11:51:08 +0000 (0:00:00.218) 0:00:30.833 ******
2025-09-19 11:51:51.308961 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.308973 | orchestrator |
2025-09-19 11:51:51.308985 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-19 11:51:51.309006 | orchestrator | Friday 19 September 2025 11:51:08 +0000 (0:00:00.673) 0:00:31.051 ******
2025-09-19 11:51:51.309017 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.309029 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:51.309042 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:51.309054 | orchestrator |
2025-09-19 11:51:51.309065 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 11:51:51.309077 | orchestrator | Friday 19 September 2025 11:51:09 +0000 (0:00:00.673) 0:00:31.725 ******
2025-09-19 11:51:51.309095 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:51:51.309108 | orchestrator |
2025-09-19 11:51:51.309118 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-19 11:51:51.309130 | orchestrator | Friday 19 September 2025 11:51:09 +0000 (0:00:00.548) 0:00:32.274 ******
2025-09-19 11:51:51.309141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309185 | orchestrator |
2025-09-19 11:51:51.309196 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-09-19 11:51:51.309207 | orchestrator | Friday 19 September 2025 11:51:11 +0000 (0:00:01.544) 0:00:33.818 ******
2025-09-19 11:51:51.309219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309238 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.309255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309266 | orchestrator | skipping: [testbed-node-1]
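The repeated `(item={'key': 'placement-api', ...})` structures in the task output above are kolla-ansible style service definitions. As a rough, illustrative sketch (the helper functions are hypothetical, not kolla-ansible code), the healthcheck command and the externally published endpoint can be derived from such a definition like this:

```python
# A trimmed copy of the 'value' dict logged for testbed-node-0 above.
service = {
    "container_name": "placement_api",
    "healthcheck": {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
        "timeout": "30",
    },
    "haproxy": {
        "placement_api": {"enabled": True, "external": False,
                          "port": "8780", "listen_port": "8780"},
        "placement_api_external": {"enabled": True, "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "8780", "listen_port": "8780"},
    },
}

def healthcheck_command(svc):
    # Docker-style healthcheck: ["CMD-SHELL", <shell command>].
    kind, cmd = svc["healthcheck"]["test"]
    return cmd if kind == "CMD-SHELL" else " ".join(svc["healthcheck"]["test"])

def external_endpoints(svc):
    # HAProxy frontends with external=True map to the public FQDN; the
    # https scheme matches the public endpoint registered in the log.
    return [f"https://{cfg['external_fqdn']}:{cfg['listen_port']}"
            for cfg in svc["haproxy"].values()
            if cfg["enabled"] and cfg["external"]]
```

This mirrors how the internal frontend binds the node IP on port 8780 while the external one is published behind `api.testbed.osism.xyz`.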
2025-09-19 11:51:51.309284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309296 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:51.309306 | orchestrator |
2025-09-19 11:51:51.309317 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-09-19 11:51:51.309329 | orchestrator | Friday 19 September 2025 11:51:12 +0000 (0:00:00.862) 0:00:34.681 ******
2025-09-19 11:51:51.309340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309351 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.309369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309381 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:51.309406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309418 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:51.309429 | orchestrator |
2025-09-19 11:51:51.309440 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-09-19 11:51:51.309450 | orchestrator | Friday 19 September 2025 11:51:12 +0000 (0:00:00.669) 0:00:35.350 ******
2025-09-19 11:51:51.309467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309644 | orchestrator |
2025-09-19 11:51:51.309655 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-09-19 11:51:51.309667 | orchestrator | Friday 19 September 2025 11:51:14 +0000 (0:00:01.340) 0:00:36.691 ******
2025-09-19 11:51:51.309684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309728 | orchestrator |
2025-09-19 11:51:51.309739 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-19 11:51:51.309757 | orchestrator | Friday 19 September 2025 11:51:16 +0000 (0:00:02.543) 0:00:39.234 ******
2025-09-19 11:51:51.309768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 11:51:51.309779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 11:51:51.309790 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 11:51:51.309800 | orchestrator |
2025-09-19 11:51:51.309811 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-19 11:51:51.309822 | orchestrator | Friday 19 September 2025 11:51:18 +0000 (0:00:01.581) 0:00:40.816 ******
2025-09-19 11:51:51.309833 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:51.309843 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:51.309854 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:51.309865 | orchestrator |
2025-09-19 11:51:51.309876 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-19 11:51:51.309886 | orchestrator | Friday 19 September 2025 11:51:19 +0000 (0:00:01.244) 0:00:42.060 ******
2025-09-19 11:51:51.309903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309914 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:51:51.309926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309937 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:51:51.309955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.309973 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:51:51.309984 | orchestrator |
2025-09-19 11:51:51.309995 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-09-19 11:51:51.310006 | orchestrator | Friday 19 September 2025 11:51:20 +0000 (0:00:00.493) 0:00:42.554 ******
2025-09-19 11:51:51.310073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.310089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.310106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:51:51.310118 | orchestrator |
2025-09-19 11:51:51.310129 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-09-19 11:51:51.310139 | orchestrator | Friday 19 September 2025 11:51:21 +0000 (0:00:01.173) 0:00:43.728 ******
2025-09-19 11:51:51.310150 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:51.310161 | orchestrator |
2025-09-19 11:51:51.310172 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-19 11:51:51.310182 | orchestrator | Friday 19 September 2025 11:51:23 +0000 (0:00:02.655) 0:00:46.383 ******
2025-09-19 11:51:51.310193 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:51.310203 | orchestrator |
2025-09-19 11:51:51.310214 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-19 11:51:51.310225 | orchestrator | Friday 19 September 2025 11:51:26 +0000 (0:00:02.335) 0:00:48.719 ******
2025-09-19 11:51:51.310237 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:51.310255 | orchestrator |
2025-09-19 11:51:51.310267 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 11:51:51.310279 | orchestrator | Friday 19 September 2025 11:51:38 +0000 (0:00:12.069) 0:01:00.788 ******
2025-09-19 11:51:51.310291 | orchestrator |
2025-09-19 11:51:51.310304 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 11:51:51.310316 | orchestrator | Friday 19 September 2025 11:51:38 +0000 (0:00:00.068) 0:01:00.856 ******
2025-09-19 11:51:51.310328 | orchestrator |
2025-09-19 11:51:51.310348 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-19 11:51:51.310360 | orchestrator | Friday
19 September 2025 11:51:38 +0000 (0:00:00.073) 0:01:00.930 ****** 2025-09-19 11:51:51.310372 | orchestrator | 2025-09-19 11:51:51.310385 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-19 11:51:51.310397 | orchestrator | Friday 19 September 2025 11:51:38 +0000 (0:00:00.073) 0:01:01.003 ****** 2025-09-19 11:51:51.310409 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:51.310421 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:51.310433 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:51.310445 | orchestrator | 2025-09-19 11:51:51.310456 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:51:51.310469 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:51:51.310501 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:51:51.310514 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:51:51.310526 | orchestrator | 2025-09-19 11:51:51.310538 | orchestrator | 2025-09-19 11:51:51.310550 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:51:51.310563 | orchestrator | Friday 19 September 2025 11:51:48 +0000 (0:00:10.115) 0:01:11.119 ****** 2025-09-19 11:51:51.310574 | orchestrator | =============================================================================== 2025-09-19 11:51:51.310587 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.07s 2025-09-19 11:51:51.310598 | orchestrator | placement : Restart placement-api container ---------------------------- 10.12s 2025-09-19 11:51:51.310611 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.22s 2025-09-19 11:51:51.310623 | orchestrator 
| service-ks-register : placement | Granting user roles ------------------- 4.80s 2025-09-19 11:51:51.310633 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.71s 2025-09-19 11:51:51.310644 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.71s 2025-09-19 11:51:51.310655 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.53s 2025-09-19 11:51:51.310763 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.47s 2025-09-19 11:51:51.310775 | orchestrator | placement : Creating placement databases -------------------------------- 2.66s 2025-09-19 11:51:51.310786 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.54s 2025-09-19 11:51:51.310797 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2025-09-19 11:51:51.310808 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.58s 2025-09-19 11:51:51.310819 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.54s 2025-09-19 11:51:51.310830 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.42s 2025-09-19 11:51:51.310847 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s 2025-09-19 11:51:51.310858 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.24s 2025-09-19 11:51:51.310877 | orchestrator | placement : Check placement containers ---------------------------------- 1.17s 2025-09-19 11:51:51.310888 | orchestrator | placement : include_tasks ----------------------------------------------- 0.89s 2025-09-19 11:51:51.310899 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.86s 2025-09-19 11:51:51.310910 | orchestrator | Group 
hosts based on enabled services ----------------------------------- 0.68s 2025-09-19 11:51:51.310920 | orchestrator | 2025-09-19 11:51:51 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:51.310932 | orchestrator | 2025-09-19 11:51:51 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:51.310948 | orchestrator | 2025-09-19 11:51:51 | INFO  | Task 454a3e1a-23fc-4cb1-9ce4-c1c66f4579d8 is in state STARTED 2025-09-19 11:51:51.312033 | orchestrator | 2025-09-19 11:51:51 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:51.312153 | orchestrator | 2025-09-19 11:51:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:54.354683 | orchestrator | 2025-09-19 11:51:54 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:54.358787 | orchestrator | 2025-09-19 11:51:54 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:54.359958 | orchestrator | 2025-09-19 11:51:54 | INFO  | Task 454a3e1a-23fc-4cb1-9ce4-c1c66f4579d8 is in state STARTED 2025-09-19 11:51:54.361601 | orchestrator | 2025-09-19 11:51:54 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:54.361672 | orchestrator | 2025-09-19 11:51:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:51:57.402599 | orchestrator | 2025-09-19 11:51:57 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:51:57.403223 | orchestrator | 2025-09-19 11:51:57 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:51:57.405156 | orchestrator | 2025-09-19 11:51:57 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:51:57.406338 | orchestrator | 2025-09-19 11:51:57 | INFO  | Task 454a3e1a-23fc-4cb1-9ce4-c1c66f4579d8 is in state SUCCESS 2025-09-19 11:51:57.407665 | orchestrator | 2025-09-19 11:51:57 | 
INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:51:57.408618 | orchestrator | 2025-09-19 11:51:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:00.441647 | orchestrator | 2025-09-19 11:52:00 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:00.443591 | orchestrator | 2025-09-19 11:52:00 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:00.444887 | orchestrator | 2025-09-19 11:52:00 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:00.447150 | orchestrator | 2025-09-19 11:52:00 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:00.447176 | orchestrator | 2025-09-19 11:52:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:03.487391 | orchestrator | 2025-09-19 11:52:03 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:03.487529 | orchestrator | 2025-09-19 11:52:03 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:03.487715 | orchestrator | 2025-09-19 11:52:03 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:03.488337 | orchestrator | 2025-09-19 11:52:03 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:03.488356 | orchestrator | 2025-09-19 11:52:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:06.541988 | orchestrator | 2025-09-19 11:52:06 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:06.543722 | orchestrator | 2025-09-19 11:52:06 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:06.545441 | orchestrator | 2025-09-19 11:52:06 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:06.547576 | orchestrator | 2025-09-19 11:52:06 | INFO  | Task 
2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:06.547630 | orchestrator | 2025-09-19 11:52:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:09.614579 | orchestrator | 2025-09-19 11:52:09 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:09.616863 | orchestrator | 2025-09-19 11:52:09 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:09.619927 | orchestrator | 2025-09-19 11:52:09 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:09.622910 | orchestrator | 2025-09-19 11:52:09 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:09.622958 | orchestrator | 2025-09-19 11:52:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:12.669564 | orchestrator | 2025-09-19 11:52:12 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:12.671626 | orchestrator | 2025-09-19 11:52:12 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:12.674512 | orchestrator | 2025-09-19 11:52:12 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:12.676814 | orchestrator | 2025-09-19 11:52:12 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:12.676836 | orchestrator | 2025-09-19 11:52:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:15.726373 | orchestrator | 2025-09-19 11:52:15 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:15.726819 | orchestrator | 2025-09-19 11:52:15 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:15.729109 | orchestrator | 2025-09-19 11:52:15 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:15.730906 | orchestrator | 2025-09-19 11:52:15 | INFO  | Task 
2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:15.731121 | orchestrator | 2025-09-19 11:52:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:18.784531 | orchestrator | 2025-09-19 11:52:18 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:18.786817 | orchestrator | 2025-09-19 11:52:18 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:18.788614 | orchestrator | 2025-09-19 11:52:18 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:18.790805 | orchestrator | 2025-09-19 11:52:18 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:18.790836 | orchestrator | 2025-09-19 11:52:18 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:21.834377 | orchestrator | 2025-09-19 11:52:21 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:21.836932 | orchestrator | 2025-09-19 11:52:21 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:21.838621 | orchestrator | 2025-09-19 11:52:21 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:21.840407 | orchestrator | 2025-09-19 11:52:21 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:21.840518 | orchestrator | 2025-09-19 11:52:21 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:24.881776 | orchestrator | 2025-09-19 11:52:24 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:24.883312 | orchestrator | 2025-09-19 11:52:24 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:24.884769 | orchestrator | 2025-09-19 11:52:24 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:24.886404 | orchestrator | 2025-09-19 11:52:24 | INFO  | Task 
2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:24.886457 | orchestrator | 2025-09-19 11:52:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:27.929312 | orchestrator | 2025-09-19 11:52:27 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:27.929744 | orchestrator | 2025-09-19 11:52:27 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:27.930923 | orchestrator | 2025-09-19 11:52:27 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:27.934206 | orchestrator | 2025-09-19 11:52:27 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:27.934259 | orchestrator | 2025-09-19 11:52:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:30.978344 | orchestrator | 2025-09-19 11:52:30 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:30.978746 | orchestrator | 2025-09-19 11:52:30 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:30.979792 | orchestrator | 2025-09-19 11:52:30 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:30.981007 | orchestrator | 2025-09-19 11:52:30 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:30.981031 | orchestrator | 2025-09-19 11:52:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:34.039376 | orchestrator | 2025-09-19 11:52:34 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:34.045370 | orchestrator | 2025-09-19 11:52:34 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:34.047469 | orchestrator | 2025-09-19 11:52:34 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:34.051072 | orchestrator | 2025-09-19 11:52:34 | INFO  | Task 
2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:34.051863 | orchestrator | 2025-09-19 11:52:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:37.094566 | orchestrator | 2025-09-19 11:52:37 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:37.096824 | orchestrator | 2025-09-19 11:52:37 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:37.099950 | orchestrator | 2025-09-19 11:52:37 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:37.102131 | orchestrator | 2025-09-19 11:52:37 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:37.102377 | orchestrator | 2025-09-19 11:52:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:40.147525 | orchestrator | 2025-09-19 11:52:40 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state STARTED 2025-09-19 11:52:40.150628 | orchestrator | 2025-09-19 11:52:40 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:52:40.153398 | orchestrator | 2025-09-19 11:52:40 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:52:40.155386 | orchestrator | 2025-09-19 11:52:40 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED 2025-09-19 11:52:40.155954 | orchestrator | 2025-09-19 11:52:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:52:43.207693 | orchestrator | 2025-09-19 11:52:43 | INFO  | Task b610dd8c-0422-4978-a4a3-6aae3cc827db is in state SUCCESS 2025-09-19 11:52:43.209323 | orchestrator | 2025-09-19 11:52:43.209366 | orchestrator | 2025-09-19 11:52:43.209379 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:52:43.209391 | orchestrator | 2025-09-19 11:52:43.209403 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-19 11:52:43.209714 | orchestrator | Friday 19 September 2025 11:51:53 +0000 (0:00:00.183) 0:00:00.183 ****** 2025-09-19 11:52:43.209735 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:52:43.209748 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:52:43.209759 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:52:43.209771 | orchestrator | 2025-09-19 11:52:43.209782 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:52:43.209793 | orchestrator | Friday 19 September 2025 11:51:53 +0000 (0:00:00.297) 0:00:00.480 ****** 2025-09-19 11:52:43.209805 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 11:52:43.209816 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 11:52:43.209827 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-19 11:52:43.209838 | orchestrator | 2025-09-19 11:52:43.209848 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-19 11:52:43.209859 | orchestrator | 2025-09-19 11:52:43.209870 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-19 11:52:43.209881 | orchestrator | Friday 19 September 2025 11:51:54 +0000 (0:00:00.630) 0:00:01.111 ****** 2025-09-19 11:52:43.209891 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:52:43.209902 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:52:43.209913 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:52:43.209923 | orchestrator | 2025-09-19 11:52:43.209934 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:52:43.209946 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:52:43.209959 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:52:43.209970 | orchestrator | 
testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:52:43.209980 | orchestrator | 2025-09-19 11:52:43.209991 | orchestrator | 2025-09-19 11:52:43.210002 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:52:43.210062 | orchestrator | Friday 19 September 2025 11:51:54 +0000 (0:00:00.626) 0:00:01.738 ****** 2025-09-19 11:52:43.210077 | orchestrator | =============================================================================== 2025-09-19 11:52:43.210105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-19 11:52:43.210116 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.63s 2025-09-19 11:52:43.210127 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-19 11:52:43.210138 | orchestrator | 2025-09-19 11:52:43.210149 | orchestrator | 2025-09-19 11:52:43.210159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:52:43.210170 | orchestrator | 2025-09-19 11:52:43.210180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:52:43.210191 | orchestrator | Friday 19 September 2025 11:50:52 +0000 (0:00:00.206) 0:00:00.206 ****** 2025-09-19 11:52:43.210223 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:52:43.210234 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:52:43.210245 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:52:43.210256 | orchestrator | 2025-09-19 11:52:43.210267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:52:43.210278 | orchestrator | Friday 19 September 2025 11:50:52 +0000 (0:00:00.227) 0:00:00.433 ****** 2025-09-19 11:52:43.210289 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-19 
11:52:43.210300 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-19 11:52:43.210310 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-19 11:52:43.210321 | orchestrator | 2025-09-19 11:52:43.210332 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-19 11:52:43.210345 | orchestrator | 2025-09-19 11:52:43.210357 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 11:52:43.210369 | orchestrator | Friday 19 September 2025 11:50:53 +0000 (0:00:00.335) 0:00:00.769 ****** 2025-09-19 11:52:43.210382 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:52:43.210394 | orchestrator | 2025-09-19 11:52:43.210407 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-19 11:52:43.210563 | orchestrator | Friday 19 September 2025 11:50:53 +0000 (0:00:00.513) 0:00:01.283 ****** 2025-09-19 11:52:43.210579 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-19 11:52:43.210592 | orchestrator | 2025-09-19 11:52:43.210604 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-19 11:52:43.210616 | orchestrator | Friday 19 September 2025 11:50:57 +0000 (0:00:03.288) 0:00:04.572 ****** 2025-09-19 11:52:43.210627 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-19 11:52:43.210640 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-19 11:52:43.210651 | orchestrator | 2025-09-19 11:52:43.210663 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-19 11:52:43.210675 | orchestrator | Friday 19 September 2025 11:51:03 +0000 
(0:00:06.805) 0:00:11.377 ****** 2025-09-19 11:52:43.210688 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:52:43.210700 | orchestrator | 2025-09-19 11:52:43.210712 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-19 11:52:43.210723 | orchestrator | Friday 19 September 2025 11:51:07 +0000 (0:00:03.643) 0:00:15.021 ****** 2025-09-19 11:52:43.210746 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:52:43.210757 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-19 11:52:43.210768 | orchestrator | 2025-09-19 11:52:43.210779 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-19 11:52:43.210790 | orchestrator | Friday 19 September 2025 11:51:11 +0000 (0:00:04.317) 0:00:19.338 ****** 2025-09-19 11:52:43.210800 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:52:43.210811 | orchestrator | 2025-09-19 11:52:43.210822 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-19 11:52:43.210832 | orchestrator | Friday 19 September 2025 11:51:15 +0000 (0:00:03.605) 0:00:22.943 ****** 2025-09-19 11:52:43.210843 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-19 11:52:43.210853 | orchestrator | 2025-09-19 11:52:43.210864 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-19 11:52:43.210875 | orchestrator | Friday 19 September 2025 11:51:19 +0000 (0:00:03.799) 0:00:26.743 ****** 2025-09-19 11:52:43.210885 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:52:43.210896 | orchestrator | 2025-09-19 11:52:43.210907 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-19 11:52:43.210929 | orchestrator | Friday 19 September 2025 11:51:22 +0000 (0:00:03.240) 0:00:29.983 
****** 2025-09-19 11:52:43.210940 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:52:43.210950 | orchestrator | 2025-09-19 11:52:43.210961 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-19 11:52:43.210972 | orchestrator | Friday 19 September 2025 11:51:26 +0000 (0:00:04.132) 0:00:34.116 ****** 2025-09-19 11:52:43.210982 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:52:43.210993 | orchestrator | 2025-09-19 11:52:43.211003 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-19 11:52:43.211014 | orchestrator | Friday 19 September 2025 11:51:30 +0000 (0:00:04.257) 0:00:38.373 ****** 2025-09-19 11:52:43.211036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211134 | orchestrator | 2025-09-19 11:52:43.211145 | orchestrator | TASK 
[magnum : Check if policies shall be overwritten] ************************* 2025-09-19 11:52:43.211156 | orchestrator | Friday 19 September 2025 11:51:32 +0000 (0:00:01.739) 0:00:40.113 ****** 2025-09-19 11:52:43.211167 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:52:43.211178 | orchestrator | 2025-09-19 11:52:43.211189 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-19 11:52:43.211200 | orchestrator | Friday 19 September 2025 11:51:32 +0000 (0:00:00.158) 0:00:40.272 ****** 2025-09-19 11:52:43.211211 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:52:43.211222 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:52:43.211232 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:52:43.211243 | orchestrator | 2025-09-19 11:52:43.211254 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-19 11:52:43.211265 | orchestrator | Friday 19 September 2025 11:51:33 +0000 (0:00:00.645) 0:00:40.918 ****** 2025-09-19 11:52:43.211276 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:52:43.211287 | orchestrator | 2025-09-19 11:52:43.211297 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-19 11:52:43.211308 | orchestrator | Friday 19 September 2025 11:51:34 +0000 (0:00:00.859) 0:00:41.777 ****** 2025-09-19 11:52:43.211319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211409 | orchestrator | 2025-09-19 11:52:43.211439 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-19 11:52:43.211458 | orchestrator | Friday 19 September 2025 11:51:36 +0000 (0:00:02.179) 0:00:43.957 ****** 2025-09-19 11:52:43.211469 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:52:43.211479 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:52:43.211490 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:52:43.211501 | orchestrator | 2025-09-19 11:52:43.211512 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 11:52:43.211530 | orchestrator | Friday 19 September 2025 11:51:36 +0000 (0:00:00.321) 0:00:44.278 ****** 2025-09-19 11:52:43.211542 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:52:43.211553 | orchestrator | 2025-09-19 11:52:43.211564 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-19 11:52:43.211574 | orchestrator | Friday 19 September 2025 11:51:37 +0000 (0:00:00.866) 0:00:45.144 ****** 2025-09-19 11:52:43.211586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.211626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.211677 | orchestrator | 2025-09-19 11:52:43.211688 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 11:52:43.211699 | orchestrator | Friday 19 September 2025 11:51:40 +0000 (0:00:02.549) 0:00:47.694 ****** 2025-09-19 11:52:43.211715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211738 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:52:43.211750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211796 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:52:43.211807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211835 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:52:43.211846 | orchestrator | 2025-09-19 11:52:43.211857 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 11:52:43.211868 | orchestrator | Friday 19 September 2025 11:51:40 +0000 (0:00:00.643) 0:00:48.337 ****** 2025-09-19 11:52:43.211879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211911 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:52:43.211929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211952 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:52:43.211969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.211981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.211998 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:52:43.212009 | orchestrator | 2025-09-19 11:52:43.212020 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 11:52:43.212031 | orchestrator | Friday 19 September 2025 11:51:41 +0000 (0:00:00.980) 0:00:49.318 ****** 2025-09-19 11:52:43.212049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212138 | orchestrator | 2025-09-19 11:52:43.212149 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-19 11:52:43.212160 | orchestrator | Friday 19 September 2025 11:51:44 +0000 (0:00:02.736) 0:00:52.054 ****** 2025-09-19 11:52:43.212171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212381 | orchestrator | 2025-09-19 11:52:43.212392 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-19 11:52:43.212403 | orchestrator | Friday 19 September 2025 11:51:49 +0000 (0:00:05.214) 0:00:57.268 ****** 2025-09-19 11:52:43.212481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.212506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.212517 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:52:43.212529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 
11:52:43.212548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.212560 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:52:43.212571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:52:43.212587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:52:43.212606 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:52:43.212617 | orchestrator | 2025-09-19 11:52:43.212628 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 11:52:43.212639 | orchestrator | Friday 19 September 2025 11:51:50 +0000 (0:00:00.682) 0:00:57.951 ****** 2025-09-19 11:52:43.212650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:52:43.212691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:52:43.212724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})
2025-09-19 11:52:43.212734 | orchestrator |
2025-09-19 11:52:43.212744 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 11:52:43.212753 | orchestrator | Friday 19 September 2025 11:51:53 +0000 (0:00:02.625) 0:01:00.576 ******
2025-09-19 11:52:43.212763 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:52:43.212772 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:52:43.212782 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:52:43.212791 | orchestrator |
2025-09-19 11:52:43.212801 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-19 11:52:43.212810 | orchestrator | Friday 19 September 2025 11:51:53 +0000 (0:00:00.342) 0:01:00.919 ******
2025-09-19 11:52:43.212820 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:52:43.212829 | orchestrator |
2025-09-19 11:52:43.212838 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-19 11:52:43.212848 | orchestrator | Friday 19 September 2025 11:51:55 +0000 (0:00:01.916) 0:01:02.836 ******
2025-09-19 11:52:43.212857 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:52:43.212867 | orchestrator |
2025-09-19 11:52:43.212876 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-19 11:52:43.212886 | orchestrator | Friday 19 September 2025 11:51:57 +0000 (0:00:01.965) 0:01:04.801 ******
2025-09-19 11:52:43.212901 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:52:43.212911 | orchestrator |
2025-09-19 11:52:43.212920 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 11:52:43.212930 | orchestrator | Friday 19 September 2025 11:52:14 +0000 (0:00:17.485) 0:01:22.287 ******
2025-09-19 11:52:43.212939 | orchestrator |
2025-09-19 11:52:43.212950 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 11:52:43.212961 | orchestrator | Friday 19 September 2025 11:52:14 +0000 (0:00:00.081) 0:01:22.368 ******
2025-09-19 11:52:43.212971 | orchestrator |
2025-09-19 11:52:43.212981 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 11:52:43.212992 | orchestrator | Friday 19 September 2025 11:52:14 +0000 (0:00:00.072) 0:01:22.441 ******
2025-09-19 11:52:43.213002 | orchestrator |
2025-09-19 11:52:43.213013 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-19 11:52:43.213034 | orchestrator | Friday 19 September 2025 11:52:15 +0000 (0:00:00.069) 0:01:22.511 ******
2025-09-19 11:52:43.213044 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:52:43.213055 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:52:43.213066 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:52:43.213076 | orchestrator |
2025-09-19 11:52:43.213087 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-19 11:52:43.213097 | orchestrator | Friday 19 September 2025 11:52:29 +0000 (0:00:14.668) 0:01:37.180 ******
2025-09-19 11:52:43.213108 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:52:43.213119 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:52:43.213129 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:52:43.213140 | orchestrator |
2025-09-19 11:52:43.213151 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:52:43.213162 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:52:43.213173 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:52:43.213184 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:52:43.213195 | orchestrator |
2025-09-19 11:52:43.213205 | orchestrator |
2025-09-19 11:52:43.213216 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:52:43.213227 | orchestrator | Friday 19 September 2025 11:52:39 +0000 (0:00:10.219) 0:01:47.399 ******
2025-09-19 11:52:43.213242 | orchestrator | ===============================================================================
2025-09-19 11:52:43.213253 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.49s
2025-09-19 11:52:43.213263 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.67s
2025-09-19 11:52:43.213274 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.22s
2025-09-19 11:52:43.213285 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.81s
2025-09-19 11:52:43.213295 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.21s
2025-09-19 11:52:43.213306 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.32s
2025-09-19 11:52:43.213316 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.26s
2025-09-19 11:52:43.213325 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.13s
2025-09-19 11:52:43.213334 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s
2025-09-19 11:52:43.213344 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.64s
2025-09-19 11:52:43.213353 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.61s
2025-09-19 11:52:43.213362 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.29s
2025-09-19 11:52:43.213372 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.24s
2025-09-19 11:52:43.213381 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.74s
2025-09-19 11:52:43.213390 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.63s
2025-09-19 11:52:43.213400 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.55s
2025-09-19 11:52:43.213409 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.18s
2025-09-19 11:52:43.213435 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.97s
2025-09-19 11:52:43.213445 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.92s
2025-09-19 11:52:43.213455 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.74s
2025-09-19 11:52:43.213464 | orchestrator | 2025-09-19 11:52:43 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:43.213480 | orchestrator | 2025-09-19 11:52:43 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:43.213925 | orchestrator | 2025-09-19 11:52:43 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:43.213946 | orchestrator | 2025-09-19 11:52:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:52:46.264225 | orchestrator | 2025-09-19 11:52:46 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:46.266425 | orchestrator | 2025-09-19 11:52:46 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:46.269033 | orchestrator | 2025-09-19 11:52:46 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:46.269044 | orchestrator | 2025-09-19 11:52:46 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:52:49.354829 | orchestrator | 2025-09-19 11:52:49 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:49.354926 | orchestrator | 2025-09-19 11:52:49 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:49.355909 | orchestrator | 2025-09-19 11:52:49 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:49.355933 | orchestrator | 2025-09-19 11:52:49 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:52:52.396216 | orchestrator | 2025-09-19 11:52:52 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:52.397739 | orchestrator | 2025-09-19 11:52:52 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:52.398673 | orchestrator | 2025-09-19 11:52:52 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:52.398771 | orchestrator | 2025-09-19 11:52:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:52:55.424265 | orchestrator | 2025-09-19 11:52:55 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:55.424737 | orchestrator | 2025-09-19 11:52:55 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:55.427743 | orchestrator | 2025-09-19 11:52:55 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:55.427779 | orchestrator | 2025-09-19 11:52:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:52:58.462607 | orchestrator | 2025-09-19 11:52:58 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:52:58.462980 | orchestrator | 2025-09-19 11:52:58 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:52:58.465487 | orchestrator | 2025-09-19 11:52:58 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:52:58.465509 | orchestrator | 2025-09-19 11:52:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:01.500557 | orchestrator | 2025-09-19 11:53:01 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:01.501445 | orchestrator | 2025-09-19 11:53:01 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:01.503072 | orchestrator | 2025-09-19 11:53:01 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:01.503115 | orchestrator | 2025-09-19 11:53:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:04.547155 | orchestrator | 2025-09-19 11:53:04 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:04.548554 | orchestrator | 2025-09-19 11:53:04 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:04.550698 | orchestrator | 2025-09-19 11:53:04 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:04.550735 | orchestrator | 2025-09-19 11:53:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:07.594453 | orchestrator | 2025-09-19 11:53:07 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:07.595412 | orchestrator | 2025-09-19 11:53:07 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:07.596981 | orchestrator | 2025-09-19 11:53:07 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:07.597030 | orchestrator | 2025-09-19 11:53:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:10.640642 | orchestrator | 2025-09-19 11:53:10 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:10.642367 | orchestrator | 2025-09-19 11:53:10 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:10.646531 | orchestrator | 2025-09-19 11:53:10 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:10.646580 | orchestrator | 2025-09-19 11:53:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:13.686226 | orchestrator | 2025-09-19 11:53:13 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:13.686950 | orchestrator | 2025-09-19 11:53:13 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:13.688740 | orchestrator | 2025-09-19 11:53:13 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:13.689151 | orchestrator | 2025-09-19 11:53:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:16.749214 | orchestrator | 2025-09-19 11:53:16 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:16.752751 | orchestrator | 2025-09-19 11:53:16 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:16.754996 | orchestrator | 2025-09-19 11:53:16 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:16.755104 | orchestrator | 2025-09-19 11:53:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:19.806792 | orchestrator | 2025-09-19 11:53:19 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:19.810987 | orchestrator | 2025-09-19 11:53:19 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:19.814578 | orchestrator | 2025-09-19 11:53:19 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state STARTED
2025-09-19 11:53:19.814628 | orchestrator | 2025-09-19 11:53:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:22.863574 | orchestrator | 2025-09-19 11:53:22 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:22.865094 | orchestrator | 2025-09-19 11:53:22 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED
2025-09-19 11:53:22.868355 | orchestrator | 2025-09-19 11:53:22 | INFO  | Task 2266b86c-cf42-4abb-8099-6801b4125a87 is in state SUCCESS
2025-09-19 11:53:22.870543 | orchestrator |
2025-09-19 11:53:22.870579 | orchestrator |
2025-09-19 11:53:22.870592 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:53:22.870604 | orchestrator |
2025-09-19 11:53:22.870614 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:53:22.870977 | orchestrator | Friday 19 September 2025 11:51:04 +0000 (0:00:00.278) 0:00:00.278 ******
2025-09-19 11:53:22.871015 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:53:22.871026 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:53:22.871036 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:53:22.871046 | orchestrator |
2025-09-19 11:53:22.871105 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:53:22.871117 | orchestrator | Friday 19 September 2025 11:51:04 +0000 (0:00:00.301) 0:00:00.580 ******
2025-09-19 11:53:22.871128 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-19 11:53:22.871139 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-19 11:53:22.871149 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-19 11:53:22.871158 | orchestrator |
2025-09-19 11:53:22.871168 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-19 11:53:22.871178 | orchestrator |
2025-09-19 11:53:22.871188 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-19 11:53:22.871198 | orchestrator | Friday 19 September 2025 11:51:04 +0000 (0:00:00.411) 0:00:00.992 ******
2025-09-19 11:53:22.871207 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1,
testbed-node-2
2025-09-19 11:53:22.871218 | orchestrator |
2025-09-19 11:53:22.871228 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-19 11:53:22.871237 | orchestrator | Friday 19 September 2025 11:51:05 +0000 (0:00:00.532) 0:00:01.525 ******
2025-09-19 11:53:22.871250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.871264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.871275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.871285 | orchestrator |
2025-09-19 11:53:22.871295 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-19 11:53:22.871305 | orchestrator | Friday 19 September 2025 11:51:06 +0000 (0:00:01.242) 0:00:02.767 ******
2025-09-19 11:53:22.871314 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-19 11:53:22.871325 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-19 11:53:22.871403 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:53:22.871414 | orchestrator |
2025-09-19 11:53:22.871955 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-19 11:53:22.871976 | orchestrator | Friday 19 September 2025 11:51:08 +0000 (0:00:01.596) 0:00:04.364 ******
2025-09-19 11:53:22.871986 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:53:22.871996 | orchestrator |
2025-09-19 11:53:22.872006 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-19 11:53:22.872016 | orchestrator | Friday 19 September 2025 11:51:08 +0000 (0:00:00.746) 0:00:05.110 ******
2025-09-19 11:53:22.872067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes':
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872101 | orchestrator | 2025-09-19 11:53:22.872111 | orchestrator | TASK [service-cert-copy : grafana | Copying 
over backend internal TLS certificate] *** 2025-09-19 11:53:22.872121 | orchestrator | Friday 19 September 2025 11:51:10 +0000 (0:00:01.566) 0:00:06.677 ****** 2025-09-19 11:53:22.872131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872161 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:22.872171 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:22.872209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872220 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:22.872230 | orchestrator | 2025-09-19 11:53:22.872240 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-19 11:53:22.872255 | orchestrator | Friday 19 September 2025 11:51:10 +0000 (0:00:00.335) 0:00:07.012 ****** 2025-09-19 11:53:22.872265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872276 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:22.872286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872296 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:22.872306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:53:22.872316 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:22.872325 | orchestrator | 2025-09-19 11:53:22.872335 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-19 11:53:22.872345 | orchestrator | Friday 19 September 2025 11:51:11 +0000 (0:00:00.941) 0:00:07.954 ****** 2025-09-19 11:53:22.872355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872455 | orchestrator | 2025-09-19 11:53:22.872465 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-19 11:53:22.872475 | orchestrator | Friday 19 September 2025 11:51:13 +0000 (0:00:01.510) 0:00:09.465 ****** 2025-09-19 11:53:22.872484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:53:22.872522 | orchestrator 
| 2025-09-19 11:53:22.872531 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-19 11:53:22.872541 | orchestrator | Friday 19 September 2025 11:51:14 +0000 (0:00:01.281) 0:00:10.746 ****** 2025-09-19 11:53:22.872551 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:22.872561 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:22.872571 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:22.872580 | orchestrator | 2025-09-19 11:53:22.872590 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-19 11:53:22.872600 | orchestrator | Friday 19 September 2025 11:51:15 +0000 (0:00:00.523) 0:00:11.269 ****** 2025-09-19 11:53:22.872609 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 11:53:22.872619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 11:53:22.872628 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 11:53:22.872638 | orchestrator | 2025-09-19 11:53:22.872648 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-19 11:53:22.872657 | orchestrator | Friday 19 September 2025 11:51:16 +0000 (0:00:01.716) 0:00:12.986 ****** 2025-09-19 11:53:22.872667 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 11:53:22.872677 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 11:53:22.872686 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 11:53:22.872696 | orchestrator | 2025-09-19 11:53:22.872705 | orchestrator | TASK 
[grafana : Find custom grafana dashboards] ******************************** 2025-09-19 11:53:22.872715 | orchestrator | Friday 19 September 2025 11:51:17 +0000 (0:00:01.194) 0:00:14.181 ****** 2025-09-19 11:53:22.872751 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:53:22.872763 | orchestrator | 2025-09-19 11:53:22.872773 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-19 11:53:22.872782 | orchestrator | Friday 19 September 2025 11:51:18 +0000 (0:00:00.764) 0:00:14.946 ****** 2025-09-19 11:53:22.872797 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-19 11:53:22.872807 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-19 11:53:22.872817 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:22.872826 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:53:22.872836 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:53:22.872845 | orchestrator | 2025-09-19 11:53:22.872855 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-19 11:53:22.872865 | orchestrator | Friday 19 September 2025 11:51:19 +0000 (0:00:00.639) 0:00:15.585 ****** 2025-09-19 11:53:22.872874 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:22.872884 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:22.872894 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:22.872903 | orchestrator | 2025-09-19 11:53:22.872913 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-19 11:53:22.872923 | orchestrator | Friday 19 September 2025 11:51:19 +0000 (0:00:00.543) 0:00:16.128 ****** 2025-09-19 11:53:22.872933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071200, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7392123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.872952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071200, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7392123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.872962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1071200, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7392123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.872972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071245, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7536323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071245, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7536323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1071245, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7536323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071207, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071207, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1071207, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873073 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071251, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7561722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071251, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7561722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1071251, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7561722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873137 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071222, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7474213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071222, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7474213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1071222, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7474213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-19 11:53:22.873179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1071234, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7517474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1071234, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7517474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1071234, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7517474, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071198, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7375221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071198, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7375221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1071198, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7375221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-19 11:53:22.873281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071204, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7404017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071204, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7404017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1071204, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7404017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-09-19 11:53:22.873341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071210, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071210, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.873408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1071210, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.742914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (repeated per-item stat output condensed; each item below is a regular file under /operations/grafana/dashboards/, mode 0644, owner root:root, changed on all three nodes)
2025-09-19 11:53:22 | orchestrator |   ceph/pool-detail.json                      19609 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/rbd-details.json                      12997 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/ceph_overview.json                    80386 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/radosgw-detail.json                   19695 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/osds-overview.json                    38432 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/multi-cluster-overview.json           62676 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/hosts-overview.json                   27218 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/pool-overview.json                    49139 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/host-details.json                     44791 bytes
2025-09-19 11:53:22 | orchestrator |   ceph/radosgw-sync-overview.json            16156 bytes
2025-09-19 11:53:22 | orchestrator |   openstack/openstack.json                   57270 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/haproxy.json               410814 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/database.json               30898 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/node-rsrc-use.json          15725 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/alertmanager-overview.json   9645 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/opensearch.json             65458 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/node_exporter_full.json    682774 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/prometheus-remote-write.json 22317 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/redfish.json                38087 bytes
2025-09-19 11:53:22 | orchestrator |   infrastructure/nodes.json                  21109 bytes
2025-09-19 11:53:22.874267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1071364, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7819972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1071364, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7819972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071317, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7706423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874316 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071317, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7706423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1071317, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7706423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071293, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7643542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874347 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071293, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7643542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1071293, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7643542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071314, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.768605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-19 11:53:22.874418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071314, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.768605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1071314, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.768605, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071286, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7620108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071286, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7620108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1071286, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7620108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1071324, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7721243, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1071324, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7721243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1071324, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7721243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1071383, 'dev': 153, 
'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7876053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1071383, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7876053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1071383, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7876053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071378, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7856052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071378, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7856052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1071378, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7856052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071274, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.760015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071274, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.760015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1071274, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.760015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874639 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071277, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7605226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071277, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7605226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1071277, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7605226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874674 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071361, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7811782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071361, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7811782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1071361, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7811782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1071375, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7849967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1071375, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7849967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:53:22.874742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1071375, 'dev': 153, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758279758.7849967, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:53:22.874752 | orchestrator |
2025-09-19 11:53:22.874763 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-09-19 11:53:22.874773 | orchestrator | Friday 19 September 2025 11:51:57 +0000 (0:00:37.479) 0:00:53.608 ******
2025-09-19 11:53:22.874783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.874799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.874809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:53:22.874819 | orchestrator |
2025-09-19 11:53:22.874829 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-09-19 11:53:22.874839 | orchestrator | Friday 19 September 2025 11:51:58 +0000 (0:00:00.970) 0:00:54.578 ******
2025-09-19 11:53:22.874848 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:22.874858 | orchestrator |
2025-09-19 11:53:22.874868 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-09-19 11:53:22.874877 | orchestrator | Friday 19 September 2025 11:52:00 +0000 (0:00:02.273) 0:00:56.852 ******
2025-09-19 11:53:22.874887 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:22.874897 | orchestrator |
2025-09-19 11:53:22.874906 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 11:53:22.874916 | orchestrator | Friday 19 September 2025 11:52:02 +0000 (0:00:00.059) 0:00:59.155 ******
2025-09-19 11:53:22.874926 | orchestrator |
2025-09-19 11:53:22.874936 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 11:53:22.874950 | orchestrator | Friday 19 September 2025 11:52:02 +0000 (0:00:00.060) 0:00:59.215 ******
2025-09-19 11:53:22.874959 | orchestrator |
2025-09-19 11:53:22.874969 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-19 11:53:22.874979 | orchestrator | Friday 19 September 2025 11:52:03 +0000 (0:00:00.060) 0:00:59.276 ******
2025-09-19 11:53:22.874988 | orchestrator |
2025-09-19 11:53:22.875002 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-19 11:53:22.875012 | orchestrator | Friday 19 September 2025 11:52:03 +0000 (0:00:00.238) 0:00:59.514 ******
2025-09-19 11:53:22.875022 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:22.875032 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:22.875041 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:22.875051 | orchestrator |
2025-09-19 11:53:22.875061 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-19 11:53:22.875070 | orchestrator | Friday 19 September 2025 11:52:05 +0000 (0:00:01.936) 0:01:01.451 ******
2025-09-19 11:53:22.875080 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:22.875090 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:22.875100 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-19 11:53:22.875115 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-19 11:53:22.875125 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-09-19 11:53:22.875135 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:53:22.875145 | orchestrator |
2025-09-19 11:53:22.875155 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-19 11:53:22.875164 | orchestrator | Friday 19 September 2025 11:52:44 +0000 (0:00:39.161) 0:01:40.612 ******
2025-09-19 11:53:22.875174 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:22.875183 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:53:22.875193 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:53:22.875203 | orchestrator |
2025-09-19 11:53:22.875212 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-19 11:53:22.875222 | orchestrator | Friday 19 September 2025 11:53:17 +0000 (0:00:32.744) 0:02:13.357 ******
2025-09-19 11:53:22.875232 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:53:22.875241 | orchestrator |
2025-09-19 11:53:22.875252 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-19 11:53:22.875269 | orchestrator | Friday 19 September 2025 11:53:19 +0000 (0:00:02.141) 0:02:15.499 ******
2025-09-19 11:53:22.875284 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:22.875300 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:22.875317 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:22.875334 | orchestrator |
2025-09-19 11:53:22.875345 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-19 11:53:22.875355 | orchestrator | Friday 19 September 2025 11:53:19 +0000 (0:00:00.607) 0:02:16.106 ******
2025-09-19 11:53:22.875385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})  2025-09-19 11:53:22.875398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-19 11:53:22.875409 | orchestrator | 2025-09-19 11:53:22.875419 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-19 11:53:22.875428 | orchestrator | Friday 19 September 2025 11:53:22 +0000 (0:00:02.309) 0:02:18.415 ****** 2025-09-19 11:53:22.875438 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:22.875448 | orchestrator | 2025-09-19 11:53:22.875457 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:53:22.875468 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:53:22.875478 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:53:22.875488 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:53:22.875498 | orchestrator | 2025-09-19 11:53:22.875507 | orchestrator | 2025-09-19 11:53:22.875517 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:53:22.875527 | orchestrator | Friday 19 September 2025 11:53:22 +0000 (0:00:00.269) 0:02:18.685 ****** 2025-09-19 11:53:22.875536 | orchestrator | =============================================================================== 2025-09-19 11:53:22.875546 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.16s 2025-09-19 11:53:22.875555 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.48s 2025-09-19 11:53:22.875572 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.74s 2025-09-19 11:53:22.875582 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.31s 2025-09-19 11:53:22.875592 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s 2025-09-19 11:53:22.875607 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.27s 2025-09-19 11:53:22.875617 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.14s 2025-09-19 11:53:22.875627 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.94s 2025-09-19 11:53:22.875642 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.72s 2025-09-19 11:53:22.875652 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.60s 2025-09-19 11:53:22.875661 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.57s 2025-09-19 11:53:22.875671 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.51s 2025-09-19 11:53:22.875681 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s 2025-09-19 11:53:22.875690 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.24s 2025-09-19 11:53:22.875700 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.19s 2025-09-19 11:53:22.875710 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.97s 2025-09-19 11:53:22.875719 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.94s 2025-09-19 11:53:22.875729 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.76s 2025-09-19 11:53:22.875738 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2025-09-19 11:53:22.875748 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.64s 2025-09-19 11:53:22.875757 | orchestrator | 2025-09-19 11:53:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:53:25.914252 | orchestrator | 2025-09-19 11:53:25 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:53:25.916081 | orchestrator | 2025-09-19 11:53:25 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:53:25.916129 | orchestrator | 2025-09-19 11:53:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:53:28.956687 | orchestrator | 2025-09-19 11:53:28 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:53:28.957875 | orchestrator | 2025-09-19 11:53:28 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:53:28.957984 | orchestrator | 2025-09-19 11:53:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:53:32.010822 | orchestrator | 2025-09-19 11:53:32 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:53:32.015147 | orchestrator | 2025-09-19 11:53:32 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state STARTED 2025-09-19 11:53:32.018649 | orchestrator | 2025-09-19 11:53:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:53:35.068020 | orchestrator | 2025-09-19 11:53:35 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:53:35.075319 | orchestrator | 2025-09-19 11:53:35 | INFO  | Task 45858f3b-830a-44d0-a328-3fbb977696c9 is in state SUCCESS 2025-09-19 11:53:35.075501 | orchestrator | 2025-09-19 11:53:35.080679 | orchestrator | 2025-09-19 11:53:35.080756 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-19 11:53:35.080772 | orchestrator | 2025-09-19 11:53:35.080784 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-19 11:53:35.080796 | orchestrator | Friday 19 September 2025 11:44:48 +0000 (0:00:00.260) 0:00:00.260 ****** 2025-09-19 11:53:35.080837 | orchestrator | changed: [testbed-manager] 2025-09-19 11:53:35.080850 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.080861 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:53:35.080872 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:53:35.080883 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:53:35.080894 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:53:35.080905 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:53:35.080916 | orchestrator | 2025-09-19 11:53:35.080927 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:53:35.080938 | orchestrator | Friday 19 September 2025 11:44:49 +0000 (0:00:00.766) 0:00:01.027 ****** 2025-09-19 11:53:35.080948 | orchestrator | changed: [testbed-manager] 2025-09-19 11:53:35.080959 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.080970 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:53:35.080981 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:53:35.080991 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:53:35.081002 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:53:35.081013 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:53:35.081023 | orchestrator | 2025-09-19 11:53:35.081034 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:53:35.081045 | orchestrator | Friday 19 September 2025 11:44:49 +0000 (0:00:00.661) 0:00:01.688 ****** 2025-09-19 11:53:35.081056 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-19 
11:53:35.081067 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 11:53:35.081078 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 11:53:35.081089 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-19 11:53:35.081099 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-19 11:53:35.081110 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-19 11:53:35.081120 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-19 11:53:35.081131 | orchestrator | 2025-09-19 11:53:35.081142 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-19 11:53:35.081153 | orchestrator | 2025-09-19 11:53:35.081164 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-19 11:53:35.081175 | orchestrator | Friday 19 September 2025 11:44:50 +0000 (0:00:00.989) 0:00:02.678 ****** 2025-09-19 11:53:35.081199 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.081210 | orchestrator | 2025-09-19 11:53:35.081221 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-19 11:53:35.081232 | orchestrator | Friday 19 September 2025 11:44:51 +0000 (0:00:00.778) 0:00:03.456 ****** 2025-09-19 11:53:35.081243 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-19 11:53:35.081254 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-19 11:53:35.081265 | orchestrator | 2025-09-19 11:53:35.081276 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-19 11:53:35.081287 | orchestrator | Friday 19 September 2025 11:44:55 +0000 (0:00:04.294) 0:00:07.750 ****** 2025-09-19 11:53:35.081300 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:53:35.081313 
| orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:53:35.081325 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.081337 | orchestrator | 2025-09-19 11:53:35.081372 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 11:53:35.081385 | orchestrator | Friday 19 September 2025 11:44:59 +0000 (0:00:04.217) 0:00:11.967 ****** 2025-09-19 11:53:35.081398 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.081410 | orchestrator | 2025-09-19 11:53:35.081423 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-19 11:53:35.081435 | orchestrator | Friday 19 September 2025 11:45:00 +0000 (0:00:00.704) 0:00:12.672 ****** 2025-09-19 11:53:35.081456 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.081468 | orchestrator | 2025-09-19 11:53:35.081481 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-19 11:53:35.081494 | orchestrator | Friday 19 September 2025 11:45:02 +0000 (0:00:01.479) 0:00:14.152 ****** 2025-09-19 11:53:35.081506 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.081518 | orchestrator | 2025-09-19 11:53:35.081531 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:53:35.081544 | orchestrator | Friday 19 September 2025 11:45:04 +0000 (0:00:02.640) 0:00:16.793 ****** 2025-09-19 11:53:35.081556 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.081569 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.081581 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.081594 | orchestrator | 2025-09-19 11:53:35.081607 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-19 11:53:35.081620 | orchestrator | Friday 19 September 2025 11:45:05 +0000 (0:00:00.472) 0:00:17.266 ****** 2025-09-19 
11:53:35.081633 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.081646 | orchestrator | 2025-09-19 11:53:35.081659 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-19 11:53:35.081672 | orchestrator | Friday 19 September 2025 11:45:34 +0000 (0:00:29.463) 0:00:46.729 ****** 2025-09-19 11:53:35.081682 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.081694 | orchestrator | 2025-09-19 11:53:35.081705 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 11:53:35.081715 | orchestrator | Friday 19 September 2025 11:45:49 +0000 (0:00:14.794) 0:01:01.524 ****** 2025-09-19 11:53:35.081731 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.081749 | orchestrator | 2025-09-19 11:53:35.081766 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 11:53:35.081783 | orchestrator | Friday 19 September 2025 11:46:00 +0000 (0:00:10.970) 0:01:12.495 ****** 2025-09-19 11:53:35.081953 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.081982 | orchestrator | 2025-09-19 11:53:35.081994 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-19 11:53:35.082006 | orchestrator | Friday 19 September 2025 11:46:01 +0000 (0:00:00.879) 0:01:13.374 ****** 2025-09-19 11:53:35.082049 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.082063 | orchestrator | 2025-09-19 11:53:35.082074 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:53:35.082085 | orchestrator | Friday 19 September 2025 11:46:01 +0000 (0:00:00.404) 0:01:13.779 ****** 2025-09-19 11:53:35.082096 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.082107 | orchestrator | 2025-09-19 11:53:35.082118 | orchestrator | 
TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-19 11:53:35.082129 | orchestrator | Friday 19 September 2025 11:46:02 +0000 (0:00:00.454) 0:01:14.233 ****** 2025-09-19 11:53:35.082140 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.082151 | orchestrator | 2025-09-19 11:53:35.082162 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 11:53:35.082176 | orchestrator | Friday 19 September 2025 11:46:19 +0000 (0:00:16.973) 0:01:31.207 ****** 2025-09-19 11:53:35.082195 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.082211 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082230 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082249 | orchestrator | 2025-09-19 11:53:35.082268 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-19 11:53:35.082282 | orchestrator | 2025-09-19 11:53:35.082293 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-19 11:53:35.082304 | orchestrator | Friday 19 September 2025 11:46:19 +0000 (0:00:00.498) 0:01:31.705 ****** 2025-09-19 11:53:35.082315 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.082326 | orchestrator | 2025-09-19 11:53:35.082411 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-19 11:53:35.082425 | orchestrator | Friday 19 September 2025 11:46:20 +0000 (0:00:00.940) 0:01:32.646 ****** 2025-09-19 11:53:35.082436 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082446 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082457 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.082468 | orchestrator | 2025-09-19 11:53:35.082479 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 
2025-09-19 11:53:35.082490 | orchestrator | Friday 19 September 2025 11:46:22 +0000 (0:00:02.041) 0:01:34.687 ****** 2025-09-19 11:53:35.082501 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082511 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082531 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.082545 | orchestrator | 2025-09-19 11:53:35.082558 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 11:53:35.082571 | orchestrator | Friday 19 September 2025 11:46:24 +0000 (0:00:02.055) 0:01:36.743 ****** 2025-09-19 11:53:35.082583 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.082597 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082609 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082622 | orchestrator | 2025-09-19 11:53:35.082635 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 11:53:35.082648 | orchestrator | Friday 19 September 2025 11:46:25 +0000 (0:00:00.356) 0:01:37.100 ****** 2025-09-19 11:53:35.082660 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 11:53:35.082673 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082685 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 11:53:35.082697 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082711 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-19 11:53:35.082724 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-19 11:53:35.082736 | orchestrator | 2025-09-19 11:53:35.082749 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 11:53:35.082763 | orchestrator | Friday 19 September 2025 11:46:34 +0000 (0:00:08.998) 0:01:46.098 ****** 2025-09-19 11:53:35.082775 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.082788 
| orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082801 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082813 | orchestrator | 2025-09-19 11:53:35.082826 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 11:53:35.082839 | orchestrator | Friday 19 September 2025 11:46:34 +0000 (0:00:00.401) 0:01:46.499 ****** 2025-09-19 11:53:35.082851 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 11:53:35.082864 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.082877 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 11:53:35.082888 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082899 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 11:53:35.082910 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082921 | orchestrator | 2025-09-19 11:53:35.082932 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 11:53:35.082942 | orchestrator | Friday 19 September 2025 11:46:36 +0000 (0:00:01.710) 0:01:48.210 ****** 2025-09-19 11:53:35.082952 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.082962 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.082972 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.082981 | orchestrator | 2025-09-19 11:53:35.082992 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-19 11:53:35.083001 | orchestrator | Friday 19 September 2025 11:46:37 +0000 (0:00:00.862) 0:01:49.073 ****** 2025-09-19 11:53:35.083011 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083021 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083030 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.083046 | orchestrator | 2025-09-19 11:53:35.083056 | orchestrator | TASK [nova-cell : Copying over nova.conf 
for nova-cell-bootstrap] ************** 2025-09-19 11:53:35.083066 | orchestrator | Friday 19 September 2025 11:46:38 +0000 (0:00:01.178) 0:01:50.252 ****** 2025-09-19 11:53:35.083075 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083086 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083193 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.083214 | orchestrator | 2025-09-19 11:53:35.083232 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-19 11:53:35.083248 | orchestrator | Friday 19 September 2025 11:46:41 +0000 (0:00:03.133) 0:01:53.385 ****** 2025-09-19 11:53:35.083264 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083280 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083297 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.083314 | orchestrator | 2025-09-19 11:53:35.083331 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 11:53:35.083363 | orchestrator | Friday 19 September 2025 11:47:00 +0000 (0:00:19.073) 0:02:12.459 ****** 2025-09-19 11:53:35.083374 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083384 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083394 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.083404 | orchestrator | 2025-09-19 11:53:35.083413 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 11:53:35.083423 | orchestrator | Friday 19 September 2025 11:47:12 +0000 (0:00:12.387) 0:02:24.846 ****** 2025-09-19 11:53:35.083432 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083442 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:53:35.083452 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083461 | orchestrator | 2025-09-19 11:53:35.083471 | orchestrator | TASK [nova-cell : Create cell] 
************************************************* 2025-09-19 11:53:35.083480 | orchestrator | Friday 19 September 2025 11:47:13 +0000 (0:00:01.116) 0:02:25.963 ****** 2025-09-19 11:53:35.083490 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083499 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083509 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.083518 | orchestrator | 2025-09-19 11:53:35.083530 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-19 11:53:35.083547 | orchestrator | Friday 19 September 2025 11:47:25 +0000 (0:00:11.669) 0:02:37.632 ****** 2025-09-19 11:53:35.083559 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.083568 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083578 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083587 | orchestrator | 2025-09-19 11:53:35.083597 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 11:53:35.083664 | orchestrator | Friday 19 September 2025 11:47:26 +0000 (0:00:00.955) 0:02:38.588 ****** 2025-09-19 11:53:35.083704 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.083714 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.083724 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.083733 | orchestrator | 2025-09-19 11:53:35.083742 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-19 11:53:35.083752 | orchestrator | 2025-09-19 11:53:35.083762 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:53:35.083779 | orchestrator | Friday 19 September 2025 11:47:27 +0000 (0:00:00.459) 0:02:39.048 ****** 2025-09-19 11:53:35.083789 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 
11:53:35.083801 | orchestrator | 2025-09-19 11:53:35.083812 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-19 11:53:35.083823 | orchestrator | Friday 19 September 2025 11:47:27 +0000 (0:00:00.559) 0:02:39.608 ****** 2025-09-19 11:53:35.083834 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-19 11:53:35.083845 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-19 11:53:35.083865 | orchestrator | 2025-09-19 11:53:35.083877 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-19 11:53:35.083888 | orchestrator | Friday 19 September 2025 11:47:31 +0000 (0:00:03.453) 0:02:43.061 ****** 2025-09-19 11:53:35.083899 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-19 11:53:35.083912 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-19 11:53:35.083923 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-19 11:53:35.083935 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-19 11:53:35.083946 | orchestrator | 2025-09-19 11:53:35.083958 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-19 11:53:35.083969 | orchestrator | Friday 19 September 2025 11:47:37 +0000 (0:00:06.511) 0:02:49.573 ****** 2025-09-19 11:53:35.083980 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:53:35.083991 | orchestrator | 2025-09-19 11:53:35.084001 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-19 11:53:35.084012 | orchestrator | Friday 19 September 2025 11:47:40 +0000 
(0:00:03.328) 0:02:52.901 ****** 2025-09-19 11:53:35.084023 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:53:35.084035 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-19 11:53:35.084046 | orchestrator | 2025-09-19 11:53:35.084057 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-19 11:53:35.084068 | orchestrator | Friday 19 September 2025 11:47:45 +0000 (0:00:04.239) 0:02:57.141 ****** 2025-09-19 11:53:35.084079 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:53:35.084090 | orchestrator | 2025-09-19 11:53:35.084101 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-19 11:53:35.084112 | orchestrator | Friday 19 September 2025 11:47:48 +0000 (0:00:03.677) 0:03:00.819 ****** 2025-09-19 11:53:35.084123 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-19 11:53:35.084135 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-19 11:53:35.084146 | orchestrator | 2025-09-19 11:53:35.084157 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 11:53:35.084264 | orchestrator | Friday 19 September 2025 11:47:56 +0000 (0:00:07.872) 0:03:08.691 ****** 2025-09-19 11:53:35.084284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.084305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.084325 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.084389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084432 | orchestrator | 2025-09-19 11:53:35.084443 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-19 11:53:35.084453 | orchestrator | Friday 19 September 2025 11:47:58 +0000 (0:00:01.487) 0:03:10.178 ****** 2025-09-19 11:53:35.084463 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.084472 | orchestrator | 2025-09-19 11:53:35.084482 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-19 11:53:35.084492 | orchestrator | Friday 19 September 2025 11:47:58 +0000 (0:00:00.313) 0:03:10.491 ****** 2025-09-19 11:53:35.084502 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.084511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.084526 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.084536 | orchestrator | 2025-09-19 11:53:35.084546 | orchestrator | TASK [nova : Check for 
vendordata file] **************************************** 2025-09-19 11:53:35.084555 | orchestrator | Friday 19 September 2025 11:47:59 +0000 (0:00:00.500) 0:03:10.991 ****** 2025-09-19 11:53:35.084565 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:53:35.084574 | orchestrator | 2025-09-19 11:53:35.084584 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-19 11:53:35.084593 | orchestrator | Friday 19 September 2025 11:47:59 +0000 (0:00:00.766) 0:03:11.758 ****** 2025-09-19 11:53:35.084603 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.084613 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.084622 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.084632 | orchestrator | 2025-09-19 11:53:35.084642 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:53:35.084651 | orchestrator | Friday 19 September 2025 11:48:00 +0000 (0:00:00.516) 0:03:12.274 ****** 2025-09-19 11:53:35.084662 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.084671 | orchestrator | 2025-09-19 11:53:35.084681 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 11:53:35.084690 | orchestrator | Friday 19 September 2025 11:48:00 +0000 (0:00:00.516) 0:03:12.791 ****** 2025-09-19 11:53:35.084701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.084743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-09-19 11:53:35.084764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.084791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.084867 | orchestrator | 2025-09-19 11:53:35.084895 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 11:53:35.084912 | orchestrator | Friday 19 September 2025 11:48:04 +0000 (0:00:03.752) 0:03:16.544 ****** 2025-09-19 11:53:35.084929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.084973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.084993 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.085012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085046 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.085116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085152 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.085163 | orchestrator | 2025-09-19 11:53:35.085179 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 11:53:35.085196 | orchestrator | Friday 19 September 2025 11:48:06 +0000 (0:00:01.942) 0:03:18.486 ****** 2025-09-19 11:53:35.085220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085256 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.085312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085430 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.085448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085469 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.085480 | orchestrator | 2025-09-19 11:53:35.085490 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-19 11:53:35.085500 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:00.713) 0:03:19.199 ****** 2025-09-19 11:53:35.085541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085656 | orchestrator | 2025-09-19 11:53:35.085665 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-19 11:53:35.085675 | orchestrator | Friday 19 September 2025 11:48:10 +0000 (0:00:02.884) 0:03:22.084 ****** 2025-09-19 11:53:35.085690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.085759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.085795 | orchestrator | 2025-09-19 11:53:35.085805 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-19 11:53:35.085814 | orchestrator | Friday 19 September 2025 11:48:19 +0000 (0:00:09.028) 0:03:31.112 ****** 2025-09-19 11:53:35.085825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085879 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.085889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085914 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.085924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:53:35.085941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.085951 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.085961 | orchestrator | 2025-09-19 11:53:35.085970 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-19 11:53:35.085980 | orchestrator | Friday 19 September 2025 11:48:20 +0000 (0:00:01.549) 0:03:32.662 ****** 2025-09-19 11:53:35.085990 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.086000 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:53:35.086009 | orchestrator | changed: [testbed-node-1] 2025-09-19 
11:53:35.086064 | orchestrator | 2025-09-19 11:53:35.086123 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-19 11:53:35.086142 | orchestrator | Friday 19 September 2025 11:48:23 +0000 (0:00:02.589) 0:03:35.251 ****** 2025-09-19 11:53:35.086158 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.086175 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.086194 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.086210 | orchestrator | 2025-09-19 11:53:35.086227 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 11:53:35.086237 | orchestrator | Friday 19 September 2025 11:48:23 +0000 (0:00:00.643) 0:03:35.897 ****** 2025-09-19 11:53:35.086248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.086270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.086329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:53:35.086342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.086411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.086422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.086432 | orchestrator | 2025-09-19 11:53:35.086448 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:53:35.086458 | orchestrator | Friday 19 September 2025 11:48:26 +0000 (0:00:02.469) 0:03:38.366 ****** 2025-09-19 11:53:35.086468 | orchestrator | 2025-09-19 11:53:35.086478 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:53:35.086487 | orchestrator | Friday 19 September 2025 11:48:26 +0000 (0:00:00.343) 0:03:38.709 ****** 2025-09-19 11:53:35.086497 | orchestrator | 2025-09-19 11:53:35.086513 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:53:35.086521 | orchestrator | Friday 19 September 2025 11:48:26 +0000 (0:00:00.240) 0:03:38.950 ****** 2025-09-19 11:53:35.086529 | orchestrator | 2025-09-19 11:53:35.086536 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-19 11:53:35.086544 | orchestrator | Friday 19 September 2025 11:48:27 +0000 (0:00:00.282) 0:03:39.233 ****** 2025-09-19 11:53:35.086552 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.086560 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:53:35.086568 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:53:35.086576 | orchestrator | 2025-09-19 11:53:35.086584 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-19 11:53:35.086592 | orchestrator | Friday 19 
September 2025 11:48:51 +0000 (0:00:24.636) 0:04:03.869 ****** 2025-09-19 11:53:35.086600 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:53:35.086607 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:53:35.086615 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:53:35.086623 | orchestrator | 2025-09-19 11:53:35.086631 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-19 11:53:35.086639 | orchestrator | 2025-09-19 11:53:35.086647 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 11:53:35.086654 | orchestrator | Friday 19 September 2025 11:49:04 +0000 (0:00:13.083) 0:04:16.952 ****** 2025-09-19 11:53:35.086663 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.086672 | orchestrator | 2025-09-19 11:53:35.086680 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 11:53:35.086687 | orchestrator | Friday 19 September 2025 11:49:06 +0000 (0:00:01.687) 0:04:18.639 ****** 2025-09-19 11:53:35.086695 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.086703 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:53:35.086711 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.086719 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.086727 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.086734 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.086742 | orchestrator | 2025-09-19 11:53:35.086750 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-19 11:53:35.086758 | orchestrator | Friday 19 September 2025 11:49:07 +0000 (0:00:00.775) 0:04:19.415 ****** 2025-09-19 11:53:35.086766 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:53:35.086774 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.086782 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.086790 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:53:35.086798 | orchestrator | 2025-09-19 11:53:35.086806 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 11:53:35.086841 | orchestrator | Friday 19 September 2025 11:49:08 +0000 (0:00:01.351) 0:04:20.767 ****** 2025-09-19 11:53:35.086850 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-19 11:53:35.086859 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-19 11:53:35.086867 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-19 11:53:35.086874 | orchestrator | 2025-09-19 11:53:35.086883 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 11:53:35.086890 | orchestrator | Friday 19 September 2025 11:49:09 +0000 (0:00:00.746) 0:04:21.513 ****** 2025-09-19 11:53:35.086898 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-19 11:53:35.086906 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-19 11:53:35.086914 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-19 11:53:35.086921 | orchestrator | 2025-09-19 11:53:35.086929 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 11:53:35.086937 | orchestrator | Friday 19 September 2025 11:49:11 +0000 (0:00:01.481) 0:04:22.994 ****** 2025-09-19 11:53:35.086954 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-19 11:53:35.086962 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.086970 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-19 11:53:35.086977 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 11:53:35.086985 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-19 11:53:35.086993 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.087001 | orchestrator | 2025-09-19 11:53:35.087009 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-19 11:53:35.087017 | orchestrator | Friday 19 September 2025 11:49:12 +0000 (0:00:01.347) 0:04:24.342 ****** 2025-09-19 11:53:35.087024 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 11:53:35.087032 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 11:53:35.087040 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:53:35.087048 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:53:35.087056 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-19 11:53:35.087063 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.087071 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:53:35.087079 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:53:35.087087 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.087099 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:53:35.087107 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 11:53:35.087115 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 11:53:35.087123 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:53:35.087131 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.087138 | orchestrator | changed: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-19 11:53:35.087146 | orchestrator | 2025-09-19 11:53:35.087154 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-19 11:53:35.087162 | orchestrator | Friday 19 September 2025 11:49:13 +0000 (0:00:01.580) 0:04:25.922 ****** 2025-09-19 11:53:35.087170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.087178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.087185 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.087193 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:53:35.087201 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:53:35.087208 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:53:35.087216 | orchestrator | 2025-09-19 11:53:35.087224 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-19 11:53:35.087232 | orchestrator | Friday 19 September 2025 11:49:15 +0000 (0:00:01.546) 0:04:27.468 ****** 2025-09-19 11:53:35.087240 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.087248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.087255 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.087263 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:53:35.087271 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:53:35.087279 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:53:35.087286 | orchestrator | 2025-09-19 11:53:35.087294 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 11:53:35.087302 | orchestrator | Friday 19 September 2025 11:49:17 +0000 (0:00:01.863) 0:04:29.332 ****** 2025-09-19 11:53:35.087311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-09-19 11:53:35.087545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087553 | orchestrator | 2025-09-19 11:53:35.087561 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 11:53:35.087569 | orchestrator | Friday 19 September 2025 11:49:20 +0000 (0:00:03.001) 0:04:32.333 ****** 2025-09-19 11:53:35.087577 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:53:35.087585 | orchestrator | 2025-09-19 11:53:35.087593 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 11:53:35.087601 | orchestrator | Friday 19 September 2025 11:49:21 +0000 (0:00:01.129) 0:04:33.463 ****** 2025-09-19 11:53:35.087613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-09-19 11:53:35.087696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087783 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:53:35.087813 | orchestrator | 2025-09-19 11:53:35.087820 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 11:53:35.087828 | orchestrator | Friday 19 September 2025 11:49:25 +0000 (0:00:03.816) 0:04:37.279 ****** 2025-09-19 11:53:35.087858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.087868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2025-09-19 11:53:35.087880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.087888 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.087897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.087911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.087940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.087950 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:53:35.087958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.087966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.087981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.087994 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.088003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.088011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088019 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.088050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.088060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088068 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.088076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.088088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088101 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.088110 | orchestrator | 2025-09-19 11:53:35.088118 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 11:53:35.088126 | 
orchestrator | Friday 19 September 2025 11:49:27 +0000 (0:00:02.335) 0:04:39.615 ****** 2025-09-19 11:53:35.088134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.088143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.088173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088183 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.088191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.088199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088212 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.088223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.088232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.088240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.088269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.088279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.088297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.088306 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.088314 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.088322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.088330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.088338 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.088361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.088392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.088401 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.088409 | orchestrator |
2025-09-19 11:53:35.088417 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:53:35.088425 | orchestrator | Friday 19 September 2025 11:49:30 +0000 (0:00:02.446) 0:04:42.061 ******
2025-09-19 11:53:35.088434 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.088442 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.088449 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.088463 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:53:35.088471 | orchestrator |
2025-09-19 11:53:35.088479 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-19 11:53:35.088487 | orchestrator | Friday 19 September 2025 11:49:30 +0000 (0:00:00.879) 0:04:42.940 ******
2025-09-19 11:53:35.088495 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 11:53:35.088503 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 11:53:35.088511 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 11:53:35.088519 | orchestrator |
2025-09-19 11:53:35.088527 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-19 11:53:35.088535 | orchestrator | Friday 19 September 2025 11:49:31 +0000 (0:00:00.883) 0:04:43.824 ******
2025-09-19 11:53:35.088544 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 11:53:35.088552 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 11:53:35.088560 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 11:53:35.088568 | orchestrator |
2025-09-19 11:53:35.088575 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-19 11:53:35.088583 | orchestrator | Friday 19 September 2025 11:49:32 +0000 (0:00:00.816) 0:04:44.641 ******
2025-09-19 11:53:35.088592 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:53:35.088600 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:53:35.088607 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:53:35.088615 | orchestrator |
2025-09-19 11:53:35.088627 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-19 11:53:35.088635 | orchestrator | Friday 19 September 2025 11:49:33 +0000 (0:00:00.468) 0:04:45.109 ******
2025-09-19 11:53:35.088643 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:53:35.088651 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:53:35.088659 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:53:35.088667 | orchestrator |
2025-09-19 11:53:35.088675 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-19 11:53:35.088684 | orchestrator | Friday 19 September 2025 11:49:33 +0000 (0:00:00.619) 0:04:45.729 ******
2025-09-19 11:53:35.088692 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:53:35.088700 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:53:35.088708 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:53:35.088716 | orchestrator |
2025-09-19 11:53:35.088724 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-19 11:53:35.088731 | orchestrator | Friday 19 September 2025 11:49:34 +0000 (0:00:01.220) 0:04:46.949 ******
2025-09-19 11:53:35.088739 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:53:35.088747 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:53:35.088755 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:53:35.088763 | orchestrator |
2025-09-19 11:53:35.088771 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-19 11:53:35.088779 | orchestrator | Friday 19 September 2025 11:49:36 +0000 (0:00:01.224) 0:04:48.173 ******
2025-09-19 11:53:35.088786 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:53:35.088794 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:53:35.088802 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:53:35.088810 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-19 11:53:35.088818 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-19 11:53:35.088826 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-19 11:53:35.088833 | orchestrator |
2025-09-19 11:53:35.088841 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-19 11:53:35.088849 | orchestrator | Friday 19 September 2025 11:49:40 +0000 (0:00:03.999) 0:04:52.172 ******
2025-09-19 11:53:35.088857 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.088873 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.088881 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.088889 | orchestrator |
2025-09-19 11:53:35.088896 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-19 11:53:35.088904 | orchestrator | Friday 19 September 2025 11:49:40 +0000 (0:00:00.571) 0:04:52.744 ******
2025-09-19 11:53:35.088912 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.088920 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.088928 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.088936 | orchestrator |
2025-09-19 11:53:35.088943 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-19 11:53:35.088952 | orchestrator | Friday 19 September 2025 11:49:41 +0000 (0:00:00.358) 0:04:53.103 ******
2025-09-19 11:53:35.088960 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.088968 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.088976 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.088984 | orchestrator |
2025-09-19 11:53:35.089016 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-19 11:53:35.089025 | orchestrator | Friday 19 September 2025 11:49:42 +0000 (0:00:01.818) 0:04:54.921 ******
2025-09-19 11:53:35.089034 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 11:53:35.089043 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 11:53:35.089051 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-19 11:53:35.089059 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 11:53:35.089067 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 11:53:35.089075 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-19 11:53:35.089083 | orchestrator |
2025-09-19 11:53:35.089091 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-19 11:53:35.089099 | orchestrator | Friday 19 September 2025 11:49:46 +0000 (0:00:03.496) 0:04:58.417 ******
2025-09-19 11:53:35.089107 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 11:53:35.089115 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 11:53:35.089123 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 11:53:35.089130 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 11:53:35.089138 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.089146 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 11:53:35.089154 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.089161 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 11:53:35.089169 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.089177 | orchestrator |
2025-09-19 11:53:35.089185 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-19 11:53:35.089192 | orchestrator | Friday 19 September 2025 11:49:50 +0000 (0:00:00.137) 0:05:02.531 ******
2025-09-19 11:53:35.089200 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.089208 | orchestrator |
2025-09-19 11:53:35.089222 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-19 11:53:35.089230 | orchestrator | Friday 19 September 2025 11:49:50 +0000 (0:00:00.568) 0:05:02.669 ******
2025-09-19 11:53:35.089239 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.089247 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.089254 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.089262 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.089276 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.089284 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.089291 | orchestrator |
2025-09-19 11:53:35.089299 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-19 11:53:35.089307 | orchestrator | Friday 19 September 2025 11:49:51 +0000 (0:00:00.683) 0:05:03.237 ******
2025-09-19 11:53:35.089315 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 11:53:35.089323 | orchestrator |
2025-09-19 11:53:35.089331 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-19 11:53:35.089339 | orchestrator | Friday 19 September 2025 11:49:51 +0000 (0:00:00.850) 0:05:03.921 ******
2025-09-19 11:53:35.089388 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.089398 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.089406 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.089414 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.089422 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.089429 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.089437 | orchestrator |
2025-09-19 11:53:35.089445 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-19 11:53:35.089453 | orchestrator | Friday 19 September 2025 11:49:52 +0000 (0:00:00.850) 0:05:04.771 ******
2025-09-19 11:53:35.089460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089600 | orchestrator |
2025-09-19 11:53:35.089607 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-19 11:53:35.089614 | orchestrator | Friday 19 September 2025 11:49:57 +0000 (0:00:04.666) 0:05:09.438 ******
2025-09-19 11:53:35.089629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:53:35.089661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.089683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:53:35.089737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2',
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.089758 | orchestrator |
2025-09-19 11:53:35.089765 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-19 11:53:35.089771 | orchestrator | Friday 19 September 2025 11:50:03 +0000 (0:00:06.481) 0:05:15.920 ******
2025-09-19 11:53:35.089778 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.089785 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.089791 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.089798 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.089805 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.089811 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.089818 | orchestrator |
2025-09-19 11:53:35.089825 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-19 11:53:35.089831 | orchestrator | Friday 19 September 2025 11:50:05 +0000 (0:00:01.697) 0:05:17.617 ******
2025-09-19 11:53:35.089838 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089845 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089851 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089858 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089868 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089874 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:53:35.089886 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089892 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.089899 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089906 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.089913 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089919 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.089926 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089933 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:53:35.089946 | orchestrator |
2025-09-19 11:53:35.089953 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-19 11:53:35.089959 | orchestrator | Friday 19 September 2025 11:50:11 +0000 (0:00:05.557) 0:05:23.174 ******
2025-09-19 11:53:35.089966 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.089973 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.089979 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.089986 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.089992 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.089999 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.090006 | orchestrator |
2025-09-19 11:53:35.090035 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-19 11:53:35.090043 | orchestrator | Friday 19 September 2025 11:50:11 +0000 (0:00:00.573) 0:05:23.748 ******
2025-09-19 11:53:35.090050 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090057 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090064 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090074 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090081 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090088 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:53:35.090094 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:53:35.090101 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:53:35.090108 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:53:35.090115 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.090121 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:53:35.090128 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.090134 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:53:35.090141 |
orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090148 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090154 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090169 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-19 11:53:35.090175 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090182 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090189 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090195 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-19 11:53:35.090202 | orchestrator | 2025-09-19 11:53:35.090208 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-19 11:53:35.090215 | orchestrator | Friday 19 September 2025 11:50:18 +0000 (0:00:07.089) 0:05:30.838 ****** 2025-09-19 11:53:35.090222 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:53:35.090229 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:53:35.090240 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:53:35.090246 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 11:53:35.090253 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'})  2025-09-19 11:53:35.090259 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-19 11:53:35.090266 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:53:35.090273 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:53:35.090279 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:53:35.090286 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:53:35.090293 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:53:35.090299 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:53:35.090306 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 11:53:35.090312 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.090319 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 11:53:35.090326 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090333 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-19 11:53:35.090339 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.090402 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:53:35.090409 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:53:35.090416 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:53:35.090422 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 
11:53:35.090430 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 11:53:35.090436 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 11:53:35.090449 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 11:53:35.090456 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 11:53:35.090462 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 11:53:35.090474 | orchestrator | 2025-09-19 11:53:35.090481 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-19 11:53:35.090487 | orchestrator | Friday 19 September 2025 11:50:25 +0000 (0:00:06.899) 0:05:37.737 ****** 2025-09-19 11:53:35.090494 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.090501 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:53:35.090508 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.090514 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.090521 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.090528 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090535 | orchestrator | 2025-09-19 11:53:35.090542 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-19 11:53:35.090549 | orchestrator | Friday 19 September 2025 11:50:26 +0000 (0:00:00.919) 0:05:38.657 ****** 2025-09-19 11:53:35.090555 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.090562 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:53:35.090569 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.090576 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.090582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.090589 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090596 | orchestrator | 2025-09-19 11:53:35.090602 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-19 11:53:35.090609 | orchestrator | Friday 19 September 2025 11:50:27 +0000 (0:00:00.618) 0:05:39.276 ****** 2025-09-19 11:53:35.090616 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.090622 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:53:35.090629 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:53:35.090636 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090642 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:53:35.090649 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:53:35.090656 | orchestrator | 2025-09-19 11:53:35.090662 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-19 11:53:35.090669 | orchestrator | Friday 19 September 2025 11:50:29 +0000 (0:00:02.426) 0:05:41.703 ****** 2025-09-19 11:53:35.090681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.090688 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.090695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.090706 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:53:35.090717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.090724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.090731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.090738 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:53:35.090749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:53:35.090756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:53:35.090771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.090779 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:53:35.090786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.090793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.090800 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:53:35.090807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.090818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:53:35.090825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:53:35.090837 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.090844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.090851 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.090857 | orchestrator |
2025-09-19 11:53:35.090864 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-19 11:53:35.090871 | orchestrator | Friday 19 September 2025 11:50:31 +0000 (0:00:01.349) 0:05:43.052 ******
2025-09-19 11:53:35.090882 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:53:35.090889 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 11:53:35.090896 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.090902 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:53:35.090909 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 11:53:35.090916 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.090923 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:53:35.090929 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 11:53:35.090936 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.090943 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 11:53:35.090950 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 11:53:35.090956 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.090963 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 11:53:35.090970 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 11:53:35.090976 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.090983 | orchestrator | skipping:
[testbed-node-2] => (item=nova-compute)  2025-09-19 11:53:35.090990 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-19 11:53:35.090996 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:53:35.091003 | orchestrator | 2025-09-19 11:53:35.091011 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-19 11:53:35.091017 | orchestrator | Friday 19 September 2025 11:50:31 +0000 (0:00:00.857) 0:05:43.910 ****** 2025-09-19 11:53:35.091024 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:53:35.091096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.091103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:53:35.091111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:53:35.091164 | orchestrator |
2025-09-19 11:53:35.091171 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:53:35.091178 | orchestrator | Friday 19 September 2025 11:50:35 +0000 (0:00:03.078) 0:05:46.989 ******
2025-09-19 11:53:35.091184 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.091191 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.091198 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.091205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.091211 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.091218 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.091225 | orchestrator |
2025-09-19 11:53:35.091231 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091238 | orchestrator | Friday 19 September 2025 11:50:36 +0000 (0:00:01.144) 0:05:48.133 ******
2025-09-19 11:53:35.091244 | orchestrator |
2025-09-19 11:53:35.091251 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091258 | orchestrator | Friday 19 September 2025 11:50:36 +0000 (0:00:00.352) 0:05:48.485 ******
2025-09-19 11:53:35.091264 | orchestrator |
2025-09-19 11:53:35.091271 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091281 | orchestrator | Friday 19 September 2025 11:50:36 +0000 (0:00:00.296) 0:05:48.782 ******
2025-09-19 11:53:35.091288 | orchestrator |
2025-09-19 11:53:35.091295 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091302 | orchestrator | Friday 19 September 2025 11:50:37 +0000 (0:00:00.340) 0:05:49.122 ******
2025-09-19 11:53:35.091308 | orchestrator |
2025-09-19 11:53:35.091315 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091322 | orchestrator | Friday 19 September 2025 11:50:37 +0000 (0:00:00.339) 0:05:49.461 ******
2025-09-19 11:53:35.091328 | orchestrator |
2025-09-19 11:53:35.091335 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 11:53:35.091342 | orchestrator | Friday 19 September 2025 11:50:37 +0000 (0:00:00.351) 0:05:49.813 ******
2025-09-19 11:53:35.091362 | orchestrator |
2025-09-19 11:53:35.091369 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-19 11:53:35.091376 | orchestrator | Friday 19 September 2025 11:50:38 +0000 (0:00:00.409) 0:05:50.222 ******
2025-09-19 11:53:35.091383 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:53:35.091389 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:53:35.091396 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:35.091408 | orchestrator |
2025-09-19 11:53:35.091415 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-19 11:53:35.091421 | orchestrator | Friday 19 September 2025 11:50:48 +0000 (0:00:10.742) 0:06:00.965 ******
2025-09-19 11:53:35.091428 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:35.091435 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:53:35.091441 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:53:35.091448 | orchestrator |
2025-09-19 11:53:35.091454 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-19 11:53:35.091461 | orchestrator | Friday 19 September 2025 11:51:05 +0000 (0:00:16.037) 0:06:17.002 ******
2025-09-19 11:53:35.091467 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.091474 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.091481 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.091487 | orchestrator |
2025-09-19 11:53:35.091494 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-19 11:53:35.091500 | orchestrator | Friday 19 September 2025 11:51:29 +0000 (0:00:24.002) 0:06:41.005 ******
2025-09-19 11:53:35.091507 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.091513 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.091520 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.091527 | orchestrator |
2025-09-19 11:53:35.091533 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-19 11:53:35.091540 | orchestrator | Friday 19 September 2025 11:52:03 +0000 (0:00:34.688) 0:07:15.693 ******
2025-09-19 11:53:35.091546 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.091553 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.091560 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.091566 | orchestrator |
2025-09-19 11:53:35.091573 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-19 11:53:35.091580 | orchestrator | Friday 19 September 2025 11:52:04 +0000 (0:00:00.861) 0:07:16.554 ******
2025-09-19 11:53:35.091586 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.091593 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.091600 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.091606 | orchestrator |
2025-09-19 11:53:35.091613 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-19 11:53:35.091623 | orchestrator | Friday 19 September 2025 11:52:05 +0000 (0:00:00.862) 0:07:17.416 ******
2025-09-19 11:53:35.091630 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:53:35.091637 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:53:35.091643 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:53:35.091650 | orchestrator |
2025-09-19 11:53:35.091657 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-19 11:53:35.091664 | orchestrator | Friday 19 September 2025 11:52:25 +0000 (0:00:19.693) 0:07:37.110 ******
2025-09-19 11:53:35.091670 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.091677 | orchestrator |
2025-09-19 11:53:35.091684 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-19 11:53:35.091691 | orchestrator | Friday 19 September 2025 11:52:25 +0000 (0:00:00.151) 0:07:37.262 ******
2025-09-19 11:53:35.091697 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.091704 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.091711 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.091718 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.091724 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.091731 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-19 11:53:35.091738 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:53:35.091745 | orchestrator |
2025-09-19 11:53:35.091751 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-19 11:53:35.091758 | orchestrator | Friday 19 September 2025 11:52:47 +0000 (0:00:22.478) 0:07:59.740 ******
2025-09-19 11:53:35.091773 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.091780 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.091787 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.091793 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.091800 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.091806 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.091813 | orchestrator |
2025-09-19 11:53:35.091820 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-19 11:53:35.091826 | orchestrator | Friday 19 September 2025 11:52:57 +0000 (0:00:09.326) 0:08:09.067 ******
2025-09-19 11:53:35.091833 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.091840 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.091847 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.091853 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.091860 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.091867 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-19 11:53:35.091873 | orchestrator |
2025-09-19 11:53:35.091883 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 11:53:35.091890 | orchestrator | Friday 19 September 2025 11:53:00 +0000 (0:00:03.715) 0:08:12.782 ******
2025-09-19 11:53:35.091897 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:53:35.091903 | orchestrator |
2025-09-19 11:53:35.091910 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 11:53:35.091917 | orchestrator | Friday 19 September 2025 11:53:13 +0000 (0:00:12.723) 0:08:25.506 ******
2025-09-19 11:53:35.091923 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:53:35.091930 | orchestrator |
2025-09-19 11:53:35.091937 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-19 11:53:35.091943 | orchestrator | Friday 19 September 2025 11:53:14 +0000 (0:00:01.378) 0:08:26.885 ******
2025-09-19 11:53:35.091950 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.091957 | orchestrator |
2025-09-19 11:53:35.091963 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-19 11:53:35.091970 | orchestrator | Friday 19 September 2025 11:53:16 +0000 (0:00:01.329) 0:08:28.215 ******
2025-09-19 11:53:35.091976 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:53:35.091983 | orchestrator |
2025-09-19 11:53:35.091990 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-19 11:53:35.091997 | orchestrator | Friday 19 September 2025 11:53:27 +0000 (0:00:11.121) 0:08:39.337 ******
2025-09-19 11:53:35.092003 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:53:35.092010 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:53:35.092017 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:53:35.092024 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:53:35.092030 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:53:35.092037 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:53:35.092044 | orchestrator |
2025-09-19 11:53:35.092050 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-19 11:53:35.092057 | orchestrator |
2025-09-19 11:53:35.092063 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-19 11:53:35.092070 | orchestrator | Friday 19 September 2025 11:53:29 +0000 (0:00:01.865) 0:08:41.202 ******
2025-09-19 11:53:35.092077 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:53:35.092084 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:53:35.092090 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:53:35.092097 | orchestrator |
2025-09-19 11:53:35.092103 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-19 11:53:35.092110 | orchestrator |
2025-09-19 11:53:35.092117 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-19 11:53:35.092124 | orchestrator | Friday 19 September 2025 11:53:30 +0000 (0:00:01.283) 0:08:42.486 ******
2025-09-19 11:53:35.092134 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.092141 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.092148 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.092155 | orchestrator |
2025-09-19 11:53:35.092161 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-19 11:53:35.092168 | orchestrator |
2025-09-19 11:53:35.092175 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-19 11:53:35.092181 | orchestrator | Friday 19 September 2025 11:53:31 +0000 (0:00:00.546) 0:08:43.032 ******
2025-09-19 11:53:35.092188 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-19 11:53:35.092199 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:53:35.092206 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092213 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-19 11:53:35.092220 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-19 11:53:35.092227 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092234 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:53:35.092240 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-19 11:53:35.092247 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:53:35.092254 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092260 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-19 11:53:35.092268 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-19 11:53:35.092274 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092281 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:53:35.092288 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-19 11:53:35.092295 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:53:35.092301 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092308 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-19 11:53:35.092315 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-19 11:53:35.092322 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092328 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:53:35.092335 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-19 11:53:35.092342 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 11:53:35.092361 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092368 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-19 11:53:35.092374 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-19 11:53:35.092381 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092388 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.092394 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-19 11:53:35.092401 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 11:53:35.092408 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092418 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-19 11:53:35.092425 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-19 11:53:35.092431 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092438 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.092445 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-19 11:53:35.092452 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 11:53:35.092458 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 11:53:35.092465 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-19 11:53:35.092477 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-19 11:53:35.092483 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-19 11:53:35.092490 | orchestrator | skipping: [testbed-node-2]
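The container definitions logged earlier carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `timeout` as string-valued seconds, plus a `test` command list such as `['CMD-SHELL', 'healthcheck_port nova-conductor 5672']`). As a rough, hedged illustration only (this is not kolla-ansible's actual code), such a dict can be mapped onto Docker's nanosecond-based HealthConfig fields like so:

```python
def to_docker_healthcheck(hc):
    """Sketch: convert a kolla-style healthcheck dict (string seconds,
    as seen in the container definitions above) into the field names and
    nanosecond units Docker's HealthConfig uses. Field handling here is
    an assumption for illustration, not kolla-ansible's implementation."""
    ns = lambda seconds: int(seconds) * 1_000_000_000  # seconds -> nanoseconds
    return {
        "Test": hc["test"],                 # e.g. ['CMD-SHELL', 'healthcheck_port nova-conductor 5672']
        "Interval": ns(hc["interval"]),
        "Timeout": ns(hc["timeout"]),
        "StartPeriod": ns(hc["start_period"]),
        "Retries": int(hc["retries"]),
    }


hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'],
      'timeout': '30'}
print(to_docker_healthcheck(hc))
```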
2025-09-19 11:53:35.092497 | orchestrator |
2025-09-19 11:53:35.092504 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-19 11:53:35.092511 | orchestrator |
2025-09-19 11:53:35.092517 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-19 11:53:35.092524 | orchestrator | Friday 19 September 2025 11:53:32 +0000 (0:00:01.400) 0:08:44.433 ******
2025-09-19 11:53:35.092531 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-19 11:53:35.092538 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-19 11:53:35.092544 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.092551 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-19 11:53:35.092558 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-19 11:53:35.092564 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.092571 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-19 11:53:35.092577 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-19 11:53:35.092584 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.092591 | orchestrator |
2025-09-19 11:53:35.092597 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-19 11:53:35.092604 | orchestrator |
2025-09-19 11:53:35.092611 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-19 11:53:35.092617 | orchestrator | Friday 19 September 2025 11:53:33 +0000 (0:00:00.781) 0:08:45.214 ******
2025-09-19 11:53:35.092624 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.092630 | orchestrator |
2025-09-19 11:53:35.092637 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-19 11:53:35.092644 | orchestrator |
2025-09-19 11:53:35.092650 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-19 11:53:35.092657 | orchestrator | Friday 19 September 2025 11:53:33 +0000 (0:00:00.685) 0:08:45.900 ******
2025-09-19 11:53:35.092663 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:53:35.092670 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:53:35.092676 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:53:35.092683 | orchestrator |
2025-09-19 11:53:35.092690 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:53:35.092696 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:53:35.092707 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-19 11:53:35.092715 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-19 11:53:35.092722 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-19 11:53:35.092728 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 11:53:35.092735 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-19 11:53:35.092742 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-19 11:53:35.092748 | orchestrator |
2025-09-19 11:53:35.092755 | orchestrator |
2025-09-19 11:53:35.092762 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:53:35.092774 | orchestrator | Friday 19 September 2025 11:53:34 +0000 (0:00:00.482) 0:08:46.383 ******
2025-09-19 11:53:35.092780 | orchestrator | ===============================================================================
2025-09-19 11:53:35.092787 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 34.69s
2025-09-19 11:53:35.092794 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.46s
2025-09-19 11:53:35.092800 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.64s
2025-09-19 11:53:35.092807 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.00s
2025-09-19 11:53:35.092814 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.48s
2025-09-19 11:53:35.092820 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.69s
2025-09-19 11:53:35.092827 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.07s
2025-09-19 11:53:35.092837 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.97s
2025-09-19 11:53:35.092844 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.04s
2025-09-19 11:53:35.092851 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.79s
2025-09-19 11:53:35.092857 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.08s
2025-09-19 11:53:35.092864 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.72s
2025-09-19 11:53:35.092871 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.39s
2025-09-19 11:53:35.092877 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.67s
2025-09-19 11:53:35.092884 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.12s
2025-09-19 11:53:35.092891 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.97s
2025-09-19 11:53:35.092897 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.74s
2025-09-19 11:53:35.092904 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.33s
2025-09-19 11:53:35.092911 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.03s
2025-09-19 11:53:35.092918 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.00s
2025-09-19 11:53:35.092924 | orchestrator | 2025-09-19 11:53:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:38.118547 | orchestrator | 2025-09-19 11:53:38 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:38.118663 | orchestrator | 2025-09-19 11:53:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:41.161194 | orchestrator | 2025-09-19 11:53:41 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:41.161306 | orchestrator | 2025-09-19 11:53:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:44.206981 | orchestrator | 2025-09-19 11:53:44 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:44.207077 | orchestrator | 2025-09-19 11:53:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:47.252248 | orchestrator | 2025-09-19 11:53:47 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:47.252407 | orchestrator | 2025-09-19 11:53:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:50.300406 | orchestrator | 2025-09-19 11:53:50 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:53:50.300534 | orchestrator | 2025-09-19 11:53:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:53:53.342642 | orchestrator | 2025-09-19 11:53:53 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:55:55.282487 | orchestrator | 2025-09-19 11:55:55 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
the next check 2025-09-19 11:55:58.339424 | orchestrator | 2025-09-19 11:55:58 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:55:58.339516 | orchestrator | 2025-09-19 11:55:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:01.388871 | orchestrator | 2025-09-19 11:56:01 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:01.388965 | orchestrator | 2025-09-19 11:56:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:04.430500 | orchestrator | 2025-09-19 11:56:04 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:04.430579 | orchestrator | 2025-09-19 11:56:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:07.469653 | orchestrator | 2025-09-19 11:56:07 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:07.469733 | orchestrator | 2025-09-19 11:56:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:10.504137 | orchestrator | 2025-09-19 11:56:10 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:10.504244 | orchestrator | 2025-09-19 11:56:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:13.553570 | orchestrator | 2025-09-19 11:56:13 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:13.553738 | orchestrator | 2025-09-19 11:56:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:16.598702 | orchestrator | 2025-09-19 11:56:16 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:16.598853 | orchestrator | 2025-09-19 11:56:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:56:19.634425 | orchestrator | 2025-09-19 11:56:19 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED 2025-09-19 11:56:19.634511 | orchestrator | 2025-09-19 11:56:19 | INFO  | Wait 1 second(s) until the next check 
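The wait loop above polls the task state until it leaves STARTED. A minimal sketch of that behaviour, assuming a hypothetical `get_state` callable standing in for the real osism task API:

```python
import time

def wait_for_task(get_state, interval=1.0):
    """Poll a task's state until it leaves STARTED, mirroring the
    'Wait 1 second(s) until the next check' loop in the log.
    get_state is a hypothetical callable returning the current state."""
    while True:
        state = get_state()
        if state != "STARTED":
            return state
        time.sleep(interval)

# Example with a fake state sequence standing in for the API:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda: next(states), interval=0)
print(result)  # SUCCESS
```

The real client also logs each check and handles FAILURE states; this sketch only shows the polling shape seen in the log.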
2025-09-19 11:56:22.681319 | orchestrator | 2025-09-19 11:56:22 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state STARTED
2025-09-19 11:56:22.681422 | orchestrator | 2025-09-19 11:56:22 | INFO  | Wait 1 second(s) until the next check
[polling repeated every ~3 s until 11:56:37]
2025-09-19 11:56:40.955370 | orchestrator | 2025-09-19 11:56:40 | INFO  | Task 75bab7b2-bfc4-41df-93ce-35a05987ba2b is in state SUCCESS
2025-09-19 11:56:40.956547 | orchestrator | 2025-09-19 11:56:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 11:56:40.957514 | orchestrator |
2025-09-19 11:56:40.957553 | orchestrator |
2025-09-19 11:56:40.957565 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:56:40.957577 |
orchestrator |
2025-09-19 11:56:40.957588 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:56:40.957600 | orchestrator | Friday 19 September 2025 11:51:58 +0000 (0:00:00.194) 0:00:00.194 ******
2025-09-19 11:56:40.957611 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.957624 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:40.957634 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:40.957646 | orchestrator |
2025-09-19 11:56:40.957657 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:56:40.957764 | orchestrator | Friday 19 September 2025 11:51:59 +0000 (0:00:00.220) 0:00:00.415 ******
2025-09-19 11:56:40.958245 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-19 11:56:40.958283 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-19 11:56:40.958295 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-19 11:56:40.958306 | orchestrator |
2025-09-19 11:56:40.958317 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-19 11:56:40.958477 | orchestrator |
2025-09-19 11:56:40.958796 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 11:56:40.958822 | orchestrator | Friday 19 September 2025 11:51:59 +0000 (0:00:00.308) 0:00:00.724 ******
2025-09-19 11:56:40.958834 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:56:40.958845 | orchestrator |
2025-09-19 11:56:40.958855 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-19 11:56:40.958867 | orchestrator | Friday 19 September 2025 11:51:59 +0000 (0:00:00.420) 0:00:01.144 ******
2025-09-19 11:56:40.958878 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-19 11:56:40.958889 | orchestrator |
2025-09-19 11:56:40.958900 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-19 11:56:40.958910 | orchestrator | Friday 19 September 2025 11:52:03 +0000 (0:00:03.657) 0:00:04.801 ******
2025-09-19 11:56:40.958921 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-19 11:56:40.958955 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-19 11:56:40.958967 | orchestrator |
2025-09-19 11:56:40.958977 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-19 11:56:40.958988 | orchestrator | Friday 19 September 2025 11:52:10 +0000 (0:00:07.028) 0:00:11.830 ******
2025-09-19 11:56:40.958999 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:56:40.959009 | orchestrator |
2025-09-19 11:56:40.959020 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-19 11:56:40.959031 | orchestrator | Friday 19 September 2025 11:52:13 +0000 (0:00:03.275) 0:00:15.106 ******
2025-09-19 11:56:40.959042 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:56:40.959068 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 11:56:40.959080 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-19 11:56:40.959110 | orchestrator |
2025-09-19 11:56:40.959122 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-19 11:56:40.959133 | orchestrator | Friday 19 September 2025 11:52:22 +0000 (0:00:08.291) 0:00:23.398 ******
2025-09-19 11:56:40.959144 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:56:40.959154 | orchestrator |
2025-09-19 11:56:40.959165 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-19 11:56:40.959176 | orchestrator | Friday 19 September 2025 11:52:25 +0000 (0:00:03.730) 0:00:27.129 ******
2025-09-19 11:56:40.959187 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 11:56:40.959197 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-19 11:56:40.959208 | orchestrator |
2025-09-19 11:56:40.959219 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-19 11:56:40.959229 | orchestrator | Friday 19 September 2025 11:52:33 +0000 (0:00:08.053) 0:00:35.182 ******
2025-09-19 11:56:40.959240 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-19 11:56:40.959251 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-19 11:56:40.959261 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-19 11:56:40.959272 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-19 11:56:40.959282 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-19 11:56:40.959293 | orchestrator |
2025-09-19 11:56:40.959303 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 11:56:40.959314 | orchestrator | Friday 19 September 2025 11:52:50 +0000 (0:00:16.247) 0:00:51.430 ******
2025-09-19 11:56:40.959325 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:56:40.959366 | orchestrator |
2025-09-19 11:56:40.959400 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-19 11:56:40.959417 | orchestrator | Friday 19 September 2025 11:52:50 +0000 (0:00:00.663) 0:00:52.093 ******
2025-09-19 11:56:40.959445 | orchestrator | changed: [testbed-node-0]
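The "Adding octavia related roles" task above creates Octavia's five RBAC roles one item at a time, reporting `changed` per role. A minimal sketch of that idempotent pattern (role names copied from the log; `ensure_roles` is a hypothetical helper, not part of the actual role):

```python
# Octavia RBAC role names, as listed item-by-item in the log above.
OCTAVIA_ROLES = [
    "load-balancer_observer",
    "load-balancer_global_observer",
    "load-balancer_member",
    "load-balancer_admin",
    "load-balancer_quota_admin",
]

def ensure_roles(existing, wanted=OCTAVIA_ROLES):
    """Return the roles still to be created so that every wanted role
    exists; a re-run against a converged cloud returns an empty list,
    which is why the task reports 'changed' only on the first deploy."""
    existing = set(existing)
    return [r for r in wanted if r not in existing]

print(ensure_roles([]))             # first run: all five roles are created
print(ensure_roles(OCTAVIA_ROLES))  # re-run: nothing to do
```

In the real deployment the creation is done against Keystone; this sketch only illustrates the desired-state comparison behind the per-item output.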
2025-09-19 11:56:40.959465 | orchestrator |
2025-09-19 11:56:40.959482 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-19 11:56:40.959500 | orchestrator | Friday 19 September 2025 11:52:55 +0000 (0:00:04.501) 0:00:56.595 ******
2025-09-19 11:56:40.959517 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.959536 | orchestrator |
2025-09-19 11:56:40.959555 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 11:56:40.959634 | orchestrator | Friday 19 September 2025 11:52:59 +0000 (0:00:04.597) 0:01:01.192 ******
2025-09-19 11:56:40.959655 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.959673 | orchestrator |
2025-09-19 11:56:40.959690 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-19 11:56:40.959709 | orchestrator | Friday 19 September 2025 11:53:03 +0000 (0:00:03.257) 0:01:04.450 ******
2025-09-19 11:56:40.959721 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 11:56:40.959734 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 11:56:40.959752 | orchestrator |
2025-09-19 11:56:40.959771 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-19 11:56:40.959789 | orchestrator | Friday 19 September 2025 11:53:13 +0000 (0:00:10.104) 0:01:14.554 ******
2025-09-19 11:56:40.959807 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-19 11:56:40.959826 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-19 11:56:40.959845 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-19 11:56:40.959864 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-19 11:56:40.959881 | orchestrator |
2025-09-19 11:56:40.959900 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-19 11:56:40.959918 | orchestrator | Friday 19 September 2025 11:53:28 +0000 (0:00:15.541) 0:01:30.096 ******
2025-09-19 11:56:40.959962 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.959982 | orchestrator |
2025-09-19 11:56:40.960000 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-09-19 11:56:40.960018 | orchestrator | Friday 19 September 2025 11:53:34 +0000 (0:00:05.582) 0:01:35.678 ******
2025-09-19 11:56:40.960036 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960055 | orchestrator |
2025-09-19 11:56:40.960069 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-09-19 11:56:40.960080 | orchestrator | Friday 19 September 2025 11:53:39 +0000 (0:00:05.166) 0:01:40.845 ******
2025-09-19 11:56:40.960091 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:56:40.960102 | orchestrator |
2025-09-19 11:56:40.960113 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-09-19 11:56:40.960123 | orchestrator | Friday 19 September 2025 11:53:39 +0000 (0:00:00.204) 0:01:41.050 ******
2025-09-19 11:56:40.960134 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960145 | orchestrator |
2025-09-19 11:56:40.960156 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 11:56:40.960167 | orchestrator | Friday 19 September 2025 11:53:45 +0000 (0:00:05.629) 0:01:46.679 ******
2025-09-19 11:56:40.960190 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:56:40.960210 | orchestrator |
2025-09-19 11:56:40.960229 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-09-19 11:56:40.960261 | orchestrator | Friday 19 September 2025 11:53:46 +0000 (0:00:01.024) 0:01:47.703 ******
2025-09-19 11:56:40.960281 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960293 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960304 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960315 | orchestrator |
2025-09-19 11:56:40.960325 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-09-19 11:56:40.960336 | orchestrator | Friday 19 September 2025 11:53:51 +0000 (0:00:05.422) 0:01:53.126 ******
2025-09-19 11:56:40.960347 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960358 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960368 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960379 | orchestrator |
2025-09-19 11:56:40.960390 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-19 11:56:40.960400 | orchestrator | Friday 19 September 2025 11:53:56 +0000 (0:00:04.758) 0:01:57.884 ******
2025-09-19 11:56:40.960411 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960422 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960433 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960443 | orchestrator |
2025-09-19 11:56:40.960454 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-09-19 11:56:40.960465 | orchestrator | Friday 19 September 2025 11:53:57 +0000 (0:00:00.832) 0:01:58.717 ******
2025-09-19 11:56:40.960476 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:40.960487 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.960497 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:40.960508 | orchestrator |
2025-09-19 11:56:40.960519 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-19 11:56:40.960530 | orchestrator | Friday 19 September 2025 11:53:59 +0000 (0:00:02.094) 0:02:00.811 ******
2025-09-19 11:56:40.960541 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960551 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960562 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960573 | orchestrator |
2025-09-19 11:56:40.960584 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-09-19 11:56:40.960595 | orchestrator | Friday 19 September 2025 11:54:00 +0000 (0:00:01.388) 0:02:02.200 ******
2025-09-19 11:56:40.960605 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960616 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960627 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960638 | orchestrator |
2025-09-19 11:56:40.960649 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-09-19 11:56:40.960659 | orchestrator | Friday 19 September 2025 11:54:02 +0000 (0:00:01.169) 0:02:03.369 ******
2025-09-19 11:56:40.960670 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960681 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960692 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960703 | orchestrator |
2025-09-19 11:56:40.960753 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-09-19 11:56:40.960765 | orchestrator | Friday 19 September 2025 11:54:04 +0000 (0:00:02.032) 0:02:05.401 ******
2025-09-19 11:56:40.960776 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:56:40.960787 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:56:40.960798 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:56:40.960808 | orchestrator |
2025-09-19 11:56:40.960819 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-09-19 11:56:40.960830 | orchestrator | Friday 19 September 2025 11:54:05 +0000 (0:00:01.592) 0:02:06.994 ******
2025-09-19 11:56:40.960841 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.960852 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:40.960863 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:40.960873 | orchestrator |
2025-09-19 11:56:40.960884 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-09-19 11:56:40.960895 | orchestrator | Friday 19 September 2025 11:54:06 +0000 (0:00:00.902) 0:02:07.897 ******
2025-09-19 11:56:40.960914 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:40.960925 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:40.960976 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.960988 | orchestrator |
2025-09-19 11:56:40.960999 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 11:56:40.961010 | orchestrator | Friday 19 September 2025 11:54:09 +0000 (0:00:02.812) 0:02:10.710 ******
2025-09-19 11:56:40.961021 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:56:40.961031 | orchestrator |
2025-09-19 11:56:40.961042 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-09-19 11:56:40.961053 | orchestrator | Friday 19 September 2025 11:54:10 +0000 (0:00:00.526) 0:02:11.236 ******
2025-09-19 11:56:40.961064 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.961075 | orchestrator |
2025-09-19 11:56:40.961085 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 11:56:40.961096 | orchestrator | Friday 19 September 2025 11:54:13 +0000 (0:00:03.853) 0:02:15.089 ******
2025-09-19 11:56:40.961107 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.961118 | orchestrator |
2025-09-19 11:56:40.961129 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-09-19 11:56:40.961139 | orchestrator | Friday 19 September 2025 11:54:17 +0000 (0:00:03.279) 0:02:18.369 ******
2025-09-19 11:56:40.961150 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 11:56:40.961161 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 11:56:40.961172 | orchestrator |
2025-09-19 11:56:40.961183 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-09-19 11:56:40.961194 | orchestrator | Friday 19 September 2025 11:54:24 +0000 (0:00:07.161) 0:02:25.530 ******
2025-09-19 11:56:40.961205 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.961215 | orchestrator |
2025-09-19 11:56:40.961227 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-09-19 11:56:40.961241 | orchestrator | Friday 19 September 2025 11:54:27 +0000 (0:00:03.410) 0:02:28.941 ******
2025-09-19 11:56:40.961260 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:40.961287 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:40.961305 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:40.961321 | orchestrator |
2025-09-19 11:56:40.961340 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-09-19 11:56:40.961358 | orchestrator | Friday 19 September 2025 11:54:28 +0000 (0:00:00.367) 0:02:29.309 ******
2025-09-19 11:56:40.961382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 11:56:40.961444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 11:56:40.961468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 11:56:40.961480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 11:56:40.961494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 11:56:40.961512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 11:56:40.961524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 11:56:40.961649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 11:56:40.961660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-19 11:56:40.961680 | orchestrator |
2025-09-19 11:56:40.961691 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-09-19 11:56:40.961702 | orchestrator | Friday 19 September 2025 11:54:30 +0000 (0:00:02.495) 0:02:31.804 ******
2025-09-19 11:56:40.961714 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:56:40.961724 | orchestrator |
2025-09-19 11:56:40.961760 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-09-19 11:56:40.961772 | orchestrator | Friday 19 September 2025 11:54:30 +0000 (0:00:00.160) 0:02:31.964 ******
2025-09-19 11:56:40.961783 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:56:40.961794 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:56:40.961805 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:56:40.961815 | orchestrator |
2025-09-19 11:56:40.961826 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-09-19 11:56:40.961837 | orchestrator | Friday 19 September 2025 11:54:31 +0000 (0:00:00.490) 0:02:32.455 ******
2025-09-19 11:56:40.961849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 11:56:40.961861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 11:56:40.961877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 11:56:40.961889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.961908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.961919 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:56:40.961978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.961992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.962103 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:56:40.962115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.962158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.962211 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:56:40.962222 | orchestrator | 2025-09-19 11:56:40.962233 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:56:40.962244 | orchestrator | Friday 19 September 2025 
11:54:31 +0000 (0:00:00.667) 0:02:33.122 ****** 2025-09-19 11:56:40.962287 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:56:40.962306 | orchestrator | 2025-09-19 11:56:40.962318 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-19 11:56:40.962329 | orchestrator | Friday 19 September 2025 11:54:32 +0000 (0:00:00.594) 0:02:33.717 ****** 2025-09-19 11:56:40.962340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.962380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.962393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.962405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.962421 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.962439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.962451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.962583 | orchestrator | 2025-09-19 11:56:40.962594 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-19 11:56:40.962605 | orchestrator | Friday 19 September 2025 11:54:37 +0000 (0:00:05.270) 0:02:38.987 ****** 2025-09-19 11:56:40.962617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2025-09-19 11:56:40.962628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.962685 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:56:40.962703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.962715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 
11:56:40.962771 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:56:40.962783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.962799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.962852 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:56:40.962863 | orchestrator | 2025-09-19 11:56:40.962874 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-19 11:56:40.962885 | orchestrator | Friday 19 September 2025 11:54:38 +0000 (0:00:00.972) 0:02:39.960 ****** 2025-09-19 11:56:40.962901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.962913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.962925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2025-09-19 11:56:40.962962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.962975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.962993 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:56:40.963013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.963024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.963036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.963048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.963066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.963078 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:56:40.963089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:56:40.963107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:56:40.963123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.963135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:56:40.963147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:56:40.963158 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:56:40.963169 | orchestrator | 2025-09-19 11:56:40.963180 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-19 11:56:40.963191 | orchestrator | Friday 19 September 2025 11:54:39 +0000 (0:00:00.874) 0:02:40.834 ****** 2025-09-19 11:56:40.963211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963429 | orchestrator | 2025-09-19 11:56:40.963440 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-19 11:56:40.963451 | orchestrator | Friday 19 September 2025 11:54:44 +0000 (0:00:05.167) 0:02:46.002 ****** 2025-09-19 11:56:40.963463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:56:40.963474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:56:40.963485 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:56:40.963496 | orchestrator | 2025-09-19 11:56:40.963508 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-19 11:56:40.963519 | orchestrator | Friday 19 September 2025 11:54:46 +0000 (0:00:02.136) 0:02:48.138 ****** 2025-09-19 11:56:40.963536 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.963585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963613 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.963625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.963778 | orchestrator | 2025-09-19 11:56:40.963789 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-19 11:56:40.963800 | orchestrator | Friday 19 September 2025 11:55:02 +0000 (0:00:15.985) 0:03:04.124 ****** 2025-09-19 11:56:40.963811 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.963823 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.963834 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.963844 | orchestrator | 2025-09-19 11:56:40.963855 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-19 11:56:40.963866 | orchestrator | Friday 19 September 2025 11:55:04 +0000 (0:00:01.445) 0:03:05.569 ****** 2025-09-19 11:56:40.963877 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.963888 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.963905 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.963916 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.963927 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.963957 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.963968 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.963979 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.963990 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964001 | 
orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964012 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964023 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964034 | orchestrator | 2025-09-19 11:56:40.964045 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-19 11:56:40.964056 | orchestrator | Friday 19 September 2025 11:55:09 +0000 (0:00:05.186) 0:03:10.756 ****** 2025-09-19 11:56:40.964067 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964078 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964089 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964099 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964110 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964121 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964132 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964143 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964154 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964165 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964176 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964187 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964197 | orchestrator | 2025-09-19 11:56:40.964208 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-19 11:56:40.964219 | orchestrator | Friday 19 September 2025 11:55:14 
+0000 (0:00:05.389) 0:03:16.146 ****** 2025-09-19 11:56:40.964230 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964241 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964252 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:56:40.964263 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964274 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964290 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:56:40.964311 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964322 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964332 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:56:40.964343 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964354 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964365 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:56:40.964376 | orchestrator | 2025-09-19 11:56:40.964387 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-19 11:56:40.964398 | orchestrator | Friday 19 September 2025 11:55:20 +0000 (0:00:05.514) 0:03:21.660 ****** 2025-09-19 11:56:40.964410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.964429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.964442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:56:40.964453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.964476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.964488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:56:40.964500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 
11:56:40.964590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:56:40.964631 | orchestrator | 2025-09-19 11:56:40.964642 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 
11:56:40.964654 | orchestrator | Friday 19 September 2025 11:55:24 +0000 (0:00:03.900) 0:03:25.561 ****** 2025-09-19 11:56:40.964665 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:56:40.964676 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:56:40.964687 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:56:40.964699 | orchestrator | 2025-09-19 11:56:40.964710 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-19 11:56:40.964721 | orchestrator | Friday 19 September 2025 11:55:24 +0000 (0:00:00.320) 0:03:25.882 ****** 2025-09-19 11:56:40.964732 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.964743 | orchestrator | 2025-09-19 11:56:40.964753 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-19 11:56:40.964764 | orchestrator | Friday 19 September 2025 11:55:26 +0000 (0:00:02.148) 0:03:28.031 ****** 2025-09-19 11:56:40.964775 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.964786 | orchestrator | 2025-09-19 11:56:40.964797 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-19 11:56:40.964808 | orchestrator | Friday 19 September 2025 11:55:28 +0000 (0:00:02.147) 0:03:30.179 ****** 2025-09-19 11:56:40.964825 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.964836 | orchestrator | 2025-09-19 11:56:40.964847 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-19 11:56:40.964858 | orchestrator | Friday 19 September 2025 11:55:31 +0000 (0:00:02.261) 0:03:32.440 ****** 2025-09-19 11:56:40.964869 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.964880 | orchestrator | 2025-09-19 11:56:40.964891 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-19 11:56:40.964902 | orchestrator | Friday 19 September 2025 11:55:33 
+0000 (0:00:02.326) 0:03:34.767 ****** 2025-09-19 11:56:40.964913 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.964923 | orchestrator | 2025-09-19 11:56:40.964991 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:56:40.965004 | orchestrator | Friday 19 September 2025 11:55:54 +0000 (0:00:21.031) 0:03:55.799 ****** 2025-09-19 11:56:40.965015 | orchestrator | 2025-09-19 11:56:40.965026 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:56:40.965037 | orchestrator | Friday 19 September 2025 11:55:54 +0000 (0:00:00.069) 0:03:55.868 ****** 2025-09-19 11:56:40.965048 | orchestrator | 2025-09-19 11:56:40.965059 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:56:40.965070 | orchestrator | Friday 19 September 2025 11:55:54 +0000 (0:00:00.069) 0:03:55.938 ****** 2025-09-19 11:56:40.965081 | orchestrator | 2025-09-19 11:56:40.965093 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-19 11:56:40.965104 | orchestrator | Friday 19 September 2025 11:55:54 +0000 (0:00:00.067) 0:03:56.006 ****** 2025-09-19 11:56:40.965121 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.965132 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.965143 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.965154 | orchestrator | 2025-09-19 11:56:40.965165 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-19 11:56:40.965176 | orchestrator | Friday 19 September 2025 11:56:11 +0000 (0:00:16.414) 0:04:12.421 ****** 2025-09-19 11:56:40.965187 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.965198 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.965209 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.965220 | 
orchestrator | 2025-09-19 11:56:40.965231 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-19 11:56:40.965242 | orchestrator | Friday 19 September 2025 11:56:22 +0000 (0:00:11.581) 0:04:24.002 ****** 2025-09-19 11:56:40.965253 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.965264 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.965275 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.965286 | orchestrator | 2025-09-19 11:56:40.965297 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-19 11:56:40.965308 | orchestrator | Friday 19 September 2025 11:56:28 +0000 (0:00:05.537) 0:04:29.540 ****** 2025-09-19 11:56:40.965318 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.965327 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.965337 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.965347 | orchestrator | 2025-09-19 11:56:40.965356 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-19 11:56:40.965366 | orchestrator | Friday 19 September 2025 11:56:33 +0000 (0:00:05.611) 0:04:35.151 ****** 2025-09-19 11:56:40.965376 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:56:40.965386 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:56:40.965395 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:56:40.965405 | orchestrator | 2025-09-19 11:56:40.965415 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:56:40.965424 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:56:40.965441 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:56:40.965451 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 
skipped=5  rescued=0 ignored=0 2025-09-19 11:56:40.965461 | orchestrator | 2025-09-19 11:56:40.965471 | orchestrator | 2025-09-19 11:56:40.965480 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:56:40.965490 | orchestrator | Friday 19 September 2025 11:56:39 +0000 (0:00:05.537) 0:04:40.689 ****** 2025-09-19 11:56:40.965505 | orchestrator | =============================================================================== 2025-09-19 11:56:40.965515 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.03s 2025-09-19 11:56:40.965525 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.41s 2025-09-19 11:56:40.965535 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.25s 2025-09-19 11:56:40.965544 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.99s 2025-09-19 11:56:40.965554 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.54s 2025-09-19 11:56:40.965564 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.58s 2025-09-19 11:56:40.965574 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.10s 2025-09-19 11:56:40.965584 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.29s 2025-09-19 11:56:40.965593 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.05s 2025-09-19 11:56:40.965603 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.16s 2025-09-19 11:56:40.965613 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.03s 2025-09-19 11:56:40.965622 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.63s 2025-09-19 11:56:40.965632 
| orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.61s 2025-09-19 11:56:40.965642 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.58s 2025-09-19 11:56:40.965651 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.54s 2025-09-19 11:56:40.965661 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.54s 2025-09-19 11:56:40.965671 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.51s 2025-09-19 11:56:40.965680 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.42s 2025-09-19 11:56:40.965690 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.39s 2025-09-19 11:56:40.965699 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.27s 2025-09-19 11:56:43.999254 | orchestrator | 2025-09-19 11:56:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:56:47.044935 | orchestrator | 2025-09-19 11:56:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:56:50.087640 | orchestrator | 2025-09-19 11:56:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:56:53.129336 | orchestrator | 2025-09-19 11:56:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:56:56.176217 | orchestrator | 2025-09-19 11:56:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:56:59.216422 | orchestrator | 2025-09-19 11:56:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:02.262463 | orchestrator | 2025-09-19 11:57:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:05.306479 | orchestrator | 2025-09-19 11:57:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:08.352705 | orchestrator | 2025-09-19 
11:57:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:11.395481 | orchestrator | 2025-09-19 11:57:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:14.439891 | orchestrator | 2025-09-19 11:57:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:17.479183 | orchestrator | 2025-09-19 11:57:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:20.525761 | orchestrator | 2025-09-19 11:57:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:23.572282 | orchestrator | 2025-09-19 11:57:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:26.612609 | orchestrator | 2025-09-19 11:57:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:29.653217 | orchestrator | 2025-09-19 11:57:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:32.697540 | orchestrator | 2025-09-19 11:57:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:35.747874 | orchestrator | 2025-09-19 11:57:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:38.794502 | orchestrator | 2025-09-19 11:57:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:57:41.843878 | orchestrator | 2025-09-19 11:57:42.151827 | orchestrator | 2025-09-19 11:57:42.156601 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 11:57:42 UTC 2025 2025-09-19 11:57:42.158298 | orchestrator | 2025-09-19 11:57:42.584929 | orchestrator | ok: Runtime: 0:33:09.976503 2025-09-19 11:57:42.827163 | 2025-09-19 11:57:42.827340 | TASK [Bootstrap services] 2025-09-19 11:57:43.568147 | orchestrator | 2025-09-19 11:57:43.568337 | orchestrator | # BOOTSTRAP 2025-09-19 11:57:43.568362 | orchestrator | 2025-09-19 11:57:43.568376 | orchestrator | + set -e 2025-09-19 11:57:43.568389 | orchestrator | + echo 2025-09-19 11:57:43.568404 | orchestrator | + echo '# BOOTSTRAP' 
2025-09-19 11:57:43.568421 | orchestrator | + echo 2025-09-19 11:57:43.568467 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-19 11:57:43.577557 | orchestrator | + set -e 2025-09-19 11:57:43.577637 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-19 11:57:47.817319 | orchestrator | 2025-09-19 11:57:47 | INFO  | It takes a moment until task f7d8809d-4602-4208-93d7-10dfb4d67e05 (flavor-manager) has been started and output is visible here. 2025-09-19 11:57:55.492417 | orchestrator | 2025-09-19 11:57:50 | INFO  | Flavor SCS-1L-1 created 2025-09-19 11:57:55.492558 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1L-1-5 created 2025-09-19 11:57:55.492644 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1V-2 created 2025-09-19 11:57:55.492667 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1V-2-5 created 2025-09-19 11:57:55.492687 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1V-4 created 2025-09-19 11:57:55.492703 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1V-4-10 created 2025-09-19 11:57:55.492714 | orchestrator | 2025-09-19 11:57:51 | INFO  | Flavor SCS-1V-8 created 2025-09-19 11:57:55.492727 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-1V-8-20 created 2025-09-19 11:57:55.492750 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-2V-4 created 2025-09-19 11:57:55.492761 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-2V-4-10 created 2025-09-19 11:57:55.492772 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-2V-8 created 2025-09-19 11:57:55.492783 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-2V-8-20 created 2025-09-19 11:57:55.492794 | orchestrator | 2025-09-19 11:57:52 | INFO  | Flavor SCS-2V-16 created 2025-09-19 11:57:55.492805 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-2V-16-50 created 2025-09-19 11:57:55.492816 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-8 
created 2025-09-19 11:57:55.492827 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-8-20 created 2025-09-19 11:57:55.492838 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-16 created 2025-09-19 11:57:55.492848 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-16-50 created 2025-09-19 11:57:55.492859 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-32 created 2025-09-19 11:57:55.492870 | orchestrator | 2025-09-19 11:57:53 | INFO  | Flavor SCS-4V-32-100 created 2025-09-19 11:57:55.492881 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-8V-16 created 2025-09-19 11:57:55.492892 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-8V-16-50 created 2025-09-19 11:57:55.492903 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-8V-32 created 2025-09-19 11:57:55.492914 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-8V-32-100 created 2025-09-19 11:57:55.492925 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-16V-32 created 2025-09-19 11:57:55.492936 | orchestrator | 2025-09-19 11:57:54 | INFO  | Flavor SCS-16V-32-100 created 2025-09-19 11:57:55.492947 | orchestrator | 2025-09-19 11:57:55 | INFO  | Flavor SCS-2V-4-20s created 2025-09-19 11:57:55.492958 | orchestrator | 2025-09-19 11:57:55 | INFO  | Flavor SCS-4V-8-50s created 2025-09-19 11:57:55.492969 | orchestrator | 2025-09-19 11:57:55 | INFO  | Flavor SCS-8V-32-100s created 2025-09-19 11:57:57.724842 | orchestrator | 2025-09-19 11:57:57 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-19 11:58:07.875460 | orchestrator | 2025-09-19 11:58:07 | INFO  | Task e8b8b0aa-c1d7-4da8-bf3f-843376a3bb32 (bootstrap-basic) was prepared for execution. 2025-09-19 11:58:07.875571 | orchestrator | 2025-09-19 11:58:07 | INFO  | It takes a moment until task e8b8b0aa-c1d7-4da8-bf3f-843376a3bb32 (bootstrap-basic) has been started and output is visible here. 
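The flavor-manager output above creates flavors whose names follow the SCS flavor naming convention (e.g. `SCS-4V-16-50` for 4 vCPUs, 16 GiB RAM, 50 GB root disk; an `L` CPU suffix marks oversubscribed vCPUs, a trailing `s` on the disk field marks local SSD). A minimal, illustrative parser for these names — not part of the testbed tooling, field meanings assumed from the naming spec — can be sketched as:

```python
import re

# Hedged sketch: parse SCS-style flavor names such as "SCS-4V-16-50".
# Pattern assumes the SCS naming convention SCS-<cpus><L|V>-<ram_gib>[-<disk_gb>[s]];
# this parser is illustrative only.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_suffix>[LV])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_suffix>s?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Return vCPU count, RAM, disk size and SSD hint parsed from an SCS flavor name."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "ram_gib": int(m.group("ram")),
        # Names like SCS-1L-1 carry no disk field; treat that as no root disk.
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("disk_suffix") == "s",
    }

# e.g. parse_scs_flavor("SCS-2V-4-20s") -> 2 vCPUs, 4 GiB RAM, 20 GB local SSD
```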
2025-09-19 11:59:07.863466 | orchestrator | 2025-09-19 11:59:07.863586 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-19 11:59:07.863604 | orchestrator | 2025-09-19 11:59:07.863617 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 11:59:07.863629 | orchestrator | Friday 19 September 2025 11:58:12 +0000 (0:00:00.099) 0:00:00.099 ****** 2025-09-19 11:59:07.863640 | orchestrator | ok: [localhost] 2025-09-19 11:59:07.863652 | orchestrator | 2025-09-19 11:59:07.863664 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-19 11:59:07.863675 | orchestrator | Friday 19 September 2025 11:58:13 +0000 (0:00:01.807) 0:00:01.907 ****** 2025-09-19 11:59:07.863686 | orchestrator | ok: [localhost] 2025-09-19 11:59:07.863696 | orchestrator | 2025-09-19 11:59:07.863708 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-19 11:59:07.863719 | orchestrator | Friday 19 September 2025 11:58:21 +0000 (0:00:07.991) 0:00:09.898 ****** 2025-09-19 11:59:07.863730 | orchestrator | changed: [localhost] 2025-09-19 11:59:07.863742 | orchestrator | 2025-09-19 11:59:07.863753 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-19 11:59:07.863764 | orchestrator | Friday 19 September 2025 11:58:29 +0000 (0:00:07.698) 0:00:17.597 ****** 2025-09-19 11:59:07.863775 | orchestrator | ok: [localhost] 2025-09-19 11:59:07.863786 | orchestrator | 2025-09-19 11:59:07.863797 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-19 11:59:07.863808 | orchestrator | Friday 19 September 2025 11:58:36 +0000 (0:00:07.171) 0:00:24.769 ****** 2025-09-19 11:59:07.863824 | orchestrator | changed: [localhost] 2025-09-19 11:59:07.863835 | orchestrator | 2025-09-19 11:59:07.863846 | orchestrator | 
TASK [Create public network] ***************************************************
2025-09-19 11:59:07.863857 | orchestrator | Friday 19 September 2025 11:58:43 +0000 (0:00:07.211) 0:00:31.981 ******
2025-09-19 11:59:07.863868 | orchestrator | changed: [localhost]
2025-09-19 11:59:07.863879 | orchestrator |
2025-09-19 11:59:07.863890 | orchestrator | TASK [Set public network to default] *******************************************
2025-09-19 11:59:07.863900 | orchestrator | Friday 19 September 2025 11:58:49 +0000 (0:00:05.787) 0:00:37.768 ******
2025-09-19 11:59:07.863911 | orchestrator | changed: [localhost]
2025-09-19 11:59:07.863922 | orchestrator |
2025-09-19 11:59:07.863933 | orchestrator | TASK [Create public subnet] ****************************************************
2025-09-19 11:59:07.863955 | orchestrator | Friday 19 September 2025 11:58:56 +0000 (0:00:06.266) 0:00:44.035 ******
2025-09-19 11:59:07.863967 | orchestrator | changed: [localhost]
2025-09-19 11:59:07.863979 | orchestrator |
2025-09-19 11:59:07.863992 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-09-19 11:59:07.864005 | orchestrator | Friday 19 September 2025 11:59:00 +0000 (0:00:04.227) 0:00:48.263 ******
2025-09-19 11:59:07.864018 | orchestrator | changed: [localhost]
2025-09-19 11:59:07.864031 | orchestrator |
2025-09-19 11:59:07.864043 | orchestrator | TASK [Create manager role] *****************************************************
2025-09-19 11:59:07.864055 | orchestrator | Friday 19 September 2025 11:59:04 +0000 (0:00:03.800) 0:00:52.063 ******
2025-09-19 11:59:07.864068 | orchestrator | ok: [localhost]
2025-09-19 11:59:07.864081 | orchestrator |
2025-09-19 11:59:07.864094 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:59:07.864106 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:59:07.864118 | orchestrator |
2025-09-19 11:59:07.864129 | orchestrator |
2025-09-19 11:59:07.864140 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:59:07.864180 | orchestrator | Friday 19 September 2025 11:59:07 +0000 (0:00:03.531) 0:00:55.595 ******
2025-09-19 11:59:07.864192 | orchestrator | ===============================================================================
2025-09-19 11:59:07.864203 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.99s
2025-09-19 11:59:07.864213 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.70s
2025-09-19 11:59:07.864249 | orchestrator | Create volume type local ------------------------------------------------ 7.21s
2025-09-19 11:59:07.864260 | orchestrator | Get volume type local --------------------------------------------------- 7.17s
2025-09-19 11:59:07.864271 | orchestrator | Set public network to default ------------------------------------------- 6.27s
2025-09-19 11:59:07.864282 | orchestrator | Create public network --------------------------------------------------- 5.79s
2025-09-19 11:59:07.864293 | orchestrator | Create public subnet ---------------------------------------------------- 4.23s
2025-09-19 11:59:07.864303 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.80s
2025-09-19 11:59:07.864314 | orchestrator | Create manager role ----------------------------------------------------- 3.53s
2025-09-19 11:59:07.864325 | orchestrator | Gathering Facts --------------------------------------------------------- 1.81s
2025-09-19 11:59:10.292594 | orchestrator | 2025-09-19 11:59:10 | INFO  | It takes a moment until task d953e68e-dcd4-42a7-8be8-643f9c58acec (image-manager) has been started and output is visible here.
2025-09-19 11:59:50.876842 | orchestrator | 2025-09-19 11:59:13 | INFO  | Processing image 'Cirros 0.6.2'
2025-09-19 11:59:50.876956 | orchestrator | 2025-09-19 11:59:13 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-09-19 11:59:50.876976 | orchestrator | 2025-09-19 11:59:13 | INFO  | Importing image Cirros 0.6.2
2025-09-19 11:59:50.876988 | orchestrator | 2025-09-19 11:59:13 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 11:59:50.877001 | orchestrator | 2025-09-19 11:59:15 | INFO  | Waiting for image to leave queued state...
2025-09-19 11:59:50.877013 | orchestrator | 2025-09-19 11:59:17 | INFO  | Waiting for import to complete...
2025-09-19 11:59:50.877024 | orchestrator | 2025-09-19 11:59:27 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-09-19 11:59:50.877035 | orchestrator | 2025-09-19 11:59:27 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-09-19 11:59:50.877046 | orchestrator | 2025-09-19 11:59:27 | INFO  | Setting internal_version = 0.6.2
2025-09-19 11:59:50.877058 | orchestrator | 2025-09-19 11:59:27 | INFO  | Setting image_original_user = cirros
2025-09-19 11:59:50.877069 | orchestrator | 2025-09-19 11:59:27 | INFO  | Adding tag os:cirros
2025-09-19 11:59:50.877097 | orchestrator | 2025-09-19 11:59:28 | INFO  | Setting property architecture: x86_64
2025-09-19 11:59:50.877179 | orchestrator | 2025-09-19 11:59:28 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 11:59:50.877190 | orchestrator | 2025-09-19 11:59:28 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 11:59:50.877202 | orchestrator | 2025-09-19 11:59:28 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 11:59:50.877213 | orchestrator | 2025-09-19 11:59:29 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 11:59:50.877224 | orchestrator | 2025-09-19 11:59:29 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 11:59:50.877234 | orchestrator | 2025-09-19 11:59:29 | INFO  | Setting property os_distro: cirros
2025-09-19 11:59:50.877245 | orchestrator | 2025-09-19 11:59:29 | INFO  | Setting property os_purpose: minimal
2025-09-19 11:59:50.877256 | orchestrator | 2025-09-19 11:59:29 | INFO  | Setting property replace_frequency: never
2025-09-19 11:59:50.877294 | orchestrator | 2025-09-19 11:59:30 | INFO  | Setting property uuid_validity: none
2025-09-19 11:59:50.877305 | orchestrator | 2025-09-19 11:59:30 | INFO  | Setting property provided_until: none
2025-09-19 11:59:50.877325 | orchestrator | 2025-09-19 11:59:30 | INFO  | Setting property image_description: Cirros
2025-09-19 11:59:50.877341 | orchestrator | 2025-09-19 11:59:30 | INFO  | Setting property image_name: Cirros
2025-09-19 11:59:50.877354 | orchestrator | 2025-09-19 11:59:30 | INFO  | Setting property internal_version: 0.6.2
2025-09-19 11:59:50.877367 | orchestrator | 2025-09-19 11:59:31 | INFO  | Setting property image_original_user: cirros
2025-09-19 11:59:50.877379 | orchestrator | 2025-09-19 11:59:31 | INFO  | Setting property os_version: 0.6.2
2025-09-19 11:59:50.877392 | orchestrator | 2025-09-19 11:59:31 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 11:59:50.877406 | orchestrator | 2025-09-19 11:59:31 | INFO  | Setting property image_build_date: 2023-05-30
2025-09-19 11:59:50.877419 | orchestrator | 2025-09-19 11:59:32 | INFO  | Checking status of 'Cirros 0.6.2'
2025-09-19 11:59:50.877431 | orchestrator | 2025-09-19 11:59:32 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-09-19 11:59:50.877443 | orchestrator | 2025-09-19 11:59:32 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-09-19 11:59:50.877455 | orchestrator | 2025-09-19 11:59:32 | INFO  | Processing image 'Cirros 0.6.3'
2025-09-19 11:59:50.877468 | orchestrator | 2025-09-19 11:59:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-09-19 11:59:50.877481 | orchestrator | 2025-09-19 11:59:32 | INFO  | Importing image Cirros 0.6.3
2025-09-19 11:59:50.877493 | orchestrator | 2025-09-19 11:59:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 11:59:50.877505 | orchestrator | 2025-09-19 11:59:33 | INFO  | Waiting for image to leave queued state...
2025-09-19 11:59:50.877518 | orchestrator | 2025-09-19 11:59:35 | INFO  | Waiting for import to complete...
2025-09-19 11:59:50.877577 | orchestrator | 2025-09-19 11:59:45 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-09-19 11:59:50.877592 | orchestrator | 2025-09-19 11:59:46 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-09-19 11:59:50.877604 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting internal_version = 0.6.3
2025-09-19 11:59:50.877617 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting image_original_user = cirros
2025-09-19 11:59:50.877629 | orchestrator | 2025-09-19 11:59:46 | INFO  | Adding tag os:cirros
2025-09-19 11:59:50.877641 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting property architecture: x86_64
2025-09-19 11:59:50.877652 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 11:59:50.877662 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 11:59:50.877673 | orchestrator | 2025-09-19 11:59:46 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 11:59:50.877684 | orchestrator | 2025-09-19 11:59:47 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 11:59:50.877695 | orchestrator | 2025-09-19 11:59:47 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 11:59:50.877705 | orchestrator | 2025-09-19 11:59:47 | INFO  | Setting property os_distro: cirros
2025-09-19 11:59:50.877726 | orchestrator | 2025-09-19 11:59:47 | INFO  | Setting property os_purpose: minimal
2025-09-19 11:59:50.877737 | orchestrator | 2025-09-19 11:59:48 | INFO  | Setting property replace_frequency: never
2025-09-19 11:59:50.877748 | orchestrator | 2025-09-19 11:59:48 | INFO  | Setting property uuid_validity: none
2025-09-19 11:59:50.877759 | orchestrator | 2025-09-19 11:59:48 | INFO  | Setting property provided_until: none
2025-09-19 11:59:50.877769 | orchestrator | 2025-09-19 11:59:48 | INFO  | Setting property image_description: Cirros
2025-09-19 11:59:50.877780 | orchestrator | 2025-09-19 11:59:48 | INFO  | Setting property image_name: Cirros
2025-09-19 11:59:50.877791 | orchestrator | 2025-09-19 11:59:49 | INFO  | Setting property internal_version: 0.6.3
2025-09-19 11:59:50.877802 | orchestrator | 2025-09-19 11:59:49 | INFO  | Setting property image_original_user: cirros
2025-09-19 11:59:50.877812 | orchestrator | 2025-09-19 11:59:49 | INFO  | Setting property os_version: 0.6.3
2025-09-19 11:59:50.877823 | orchestrator | 2025-09-19 11:59:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 11:59:50.877834 | orchestrator | 2025-09-19 11:59:49 | INFO  | Setting property image_build_date: 2024-09-26
2025-09-19 11:59:50.877851 | orchestrator | 2025-09-19 11:59:50 | INFO  | Checking status of 'Cirros 0.6.3'
2025-09-19 11:59:50.877862 | orchestrator | 2025-09-19 11:59:50 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-09-19 11:59:50.877873 | orchestrator | 2025-09-19 11:59:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-09-19 11:59:51.161161 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-09-19 11:59:53.213885 | orchestrator | 2025-09-19 11:59:53 | INFO  | date: 2025-09-19
2025-09-19 11:59:53.213987 | orchestrator | 2025-09-19 11:59:53 | INFO  | image: octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 11:59:53.214256 | orchestrator | 2025-09-19 11:59:53 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 11:59:53.214762 | orchestrator | 2025-09-19 11:59:53 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM
2025-09-19 11:59:53.253917 | orchestrator | 2025-09-19 11:59:53 | INFO  | checksum: cb1f8a9bf0aeb0e92074b04499e688b0043001241167a8bf8df49931cc66885f
2025-09-19 11:59:53.322001 | orchestrator | 2025-09-19 11:59:53 | INFO  | It takes a moment until task e472d90b-2cd7-4e06-b08e-e3d3251aae00 (image-manager) has been started and output is visible here.
2025-09-19 12:00:54.883737 | orchestrator | 2025-09-19 11:59:55 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 12:00:54.883857 | orchestrator | 2025-09-19 11:59:55 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2: 200
2025-09-19 12:00:54.883879 | orchestrator | 2025-09-19 11:59:55 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-19
2025-09-19 12:00:54.883892 | orchestrator | 2025-09-19 11:59:55 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 12:00:54.883905 | orchestrator | 2025-09-19 11:59:56 | INFO  | Waiting for image to leave queued state...
2025-09-19 12:00:54.883917 | orchestrator | 2025-09-19 11:59:58 | INFO  | Waiting for import to complete...
2025-09-19 12:00:54.883952 | orchestrator | 2025-09-19 12:00:09 | INFO  | Waiting for import to complete...
2025-09-19 12:00:54.883963 | orchestrator | 2025-09-19 12:00:19 | INFO  | Waiting for import to complete...
2025-09-19 12:00:54.883974 | orchestrator | 2025-09-19 12:00:29 | INFO  | Waiting for import to complete...
2025-09-19 12:00:54.883984 | orchestrator | 2025-09-19 12:00:39 | INFO  | Waiting for import to complete...
2025-09-19 12:00:54.884022 | orchestrator | 2025-09-19 12:00:49 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-19' successfully completed, reloading images
2025-09-19 12:00:54.884034 | orchestrator | 2025-09-19 12:00:50 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 12:00:54.884045 | orchestrator | 2025-09-19 12:00:50 | INFO  | Setting internal_version = 2025-09-19
2025-09-19 12:00:54.884056 | orchestrator | 2025-09-19 12:00:50 | INFO  | Setting image_original_user = ubuntu
2025-09-19 12:00:54.884068 | orchestrator | 2025-09-19 12:00:50 | INFO  | Adding tag amphora
2025-09-19 12:00:54.884078 | orchestrator | 2025-09-19 12:00:50 | INFO  | Adding tag os:ubuntu
2025-09-19 12:00:54.884089 | orchestrator | 2025-09-19 12:00:50 | INFO  | Setting property architecture: x86_64
2025-09-19 12:00:54.884100 | orchestrator | 2025-09-19 12:00:50 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 12:00:54.884110 | orchestrator | 2025-09-19 12:00:50 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 12:00:54.884121 | orchestrator | 2025-09-19 12:00:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 12:00:54.884146 | orchestrator | 2025-09-19 12:00:51 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 12:00:54.884158 | orchestrator | 2025-09-19 12:00:51 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 12:00:54.884168 | orchestrator | 2025-09-19 12:00:51 | INFO  | Setting property os_distro: ubuntu
2025-09-19 12:00:54.884179 | orchestrator | 2025-09-19 12:00:52 | INFO  | Setting property replace_frequency: quarterly
2025-09-19 12:00:54.884190 | orchestrator | 2025-09-19 12:00:52 | INFO  | Setting property uuid_validity: last-1
2025-09-19 12:00:54.884200 | orchestrator | 2025-09-19 12:00:52 | INFO  | Setting property provided_until: none
2025-09-19 12:00:54.884211 | orchestrator | 2025-09-19 12:00:52 | INFO  | Setting property os_purpose: network
2025-09-19 12:00:54.884221 | orchestrator | 2025-09-19 12:00:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-19 12:00:54.884232 | orchestrator | 2025-09-19 12:00:53 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-19 12:00:54.884243 | orchestrator | 2025-09-19 12:00:53 | INFO  | Setting property internal_version: 2025-09-19
2025-09-19 12:00:54.884253 | orchestrator | 2025-09-19 12:00:53 | INFO  | Setting property image_original_user: ubuntu
2025-09-19 12:00:54.884264 | orchestrator | 2025-09-19 12:00:53 | INFO  | Setting property os_version: 2025-09-19
2025-09-19 12:00:54.884275 | orchestrator | 2025-09-19 12:00:54 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 12:00:54.884286 | orchestrator | 2025-09-19 12:00:54 | INFO  | Setting property image_build_date: 2025-09-19
2025-09-19 12:00:54.884297 | orchestrator | 2025-09-19 12:00:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 12:00:54.884307 | orchestrator | 2025-09-19 12:00:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 12:00:54.884343 | orchestrator | 2025-09-19 12:00:54 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-19 12:00:54.884355 | orchestrator | 2025-09-19 12:00:54 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-19 12:00:54.884368 | orchestrator | 2025-09-19 12:00:54 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-19 12:00:54.884379 | orchestrator | 2025-09-19 12:00:54 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-19 12:00:55.505010 | orchestrator | ok: Runtime: 0:03:12.005849
2025-09-19 12:00:55.568338 |
2025-09-19 12:00:55.568474 | TASK [Run checks]
2025-09-19 12:00:56.273726 | orchestrator | + set -e
2025-09-19 12:00:56.273950 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 12:00:56.273977 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 12:00:56.273998 | orchestrator | ++ INTERACTIVE=false
2025-09-19 12:00:56.274087 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 12:00:56.274102 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 12:00:56.274116 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 12:00:56.274915 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 12:00:56.281723 | orchestrator |
2025-09-19 12:00:56.281774 | orchestrator | # CHECK
2025-09-19 12:00:56.281786 | orchestrator |
2025-09-19 12:00:56.281798 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 12:00:56.281814 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 12:00:56.281826 | orchestrator | + echo
2025-09-19 12:00:56.281837 | orchestrator | + echo '# CHECK'
2025-09-19 12:00:56.281848 | orchestrator | + echo
2025-09-19 12:00:56.281863 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 12:00:56.282395 | orchestrator | ++ semver latest 5.0.0
2025-09-19 12:00:56.353657 | orchestrator |
2025-09-19 12:00:56.353763 | orchestrator | ## Containers @ testbed-manager
2025-09-19 12:00:56.353778 | orchestrator |
2025-09-19 12:00:56.353802 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 12:00:56.353815 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19
12:00:56.353826 | orchestrator | + echo
2025-09-19 12:00:56.353838 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-19 12:00:56.353850 | orchestrator | + echo
2025-09-19 12:00:56.353861 | orchestrator | + osism container testbed-manager ps
2025-09-19 12:00:58.656514 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 12:00:58.656636 | orchestrator | 079cabd80a09 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2025-09-19 12:00:58.656672 | orchestrator | 524df1ba7ae8 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-09-19 12:00:58.656691 | orchestrator | f47d2b2e1d2d registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 12:00:58.656703 | orchestrator | 80a68e4de874 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-09-19 12:00:58.656714 | orchestrator | 15ee07f641ad registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2025-09-19 12:00:58.656731 | orchestrator | c175805aec0d registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-09-19 12:00:58.656743 | orchestrator | 2fbaa1fa94f6 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 12:00:58.656755 | orchestrator | 91353e14f132 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 12:00:58.656767 | orchestrator | 59f5050471ce registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-09-19 12:00:58.656815 | orchestrator | 3fcc46b23fe7 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin
2025-09-19 12:00:58.656828 | orchestrator | 7ea1ce84ba14 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 30 minutes openstackclient
2025-09-19 12:00:58.656839 | orchestrator | 00369d6d913c registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer
2025-09-19 12:00:58.656851 | orchestrator | 4c6c91548157 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-19 12:00:58.656863 | orchestrator | 69f3884a47e7 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1
2025-09-19 12:00:58.656874 | orchestrator | ca81457780e6 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) ceph-ansible
2025-09-19 12:00:58.656906 | orchestrator | 1b290981ddeb registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) kolla-ansible
2025-09-19 12:00:58.656923 | orchestrator | 88afbed00645 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-kubernetes
2025-09-19 12:00:58.656935 | orchestrator | ec9d0a3b4fb8 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-ansible
2025-09-19 12:00:58.656947 | orchestrator | b075ca64c0d5 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-19 12:00:58.656958 | orchestrator | 0b29fe50d4b6 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient
2025-09-19 12:00:58.656970 | orchestrator | df1ab62bb338 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1
2025-09-19 12:00:58.656981 | orchestrator | f776856401e4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-19 12:00:58.656993 | orchestrator | 7b92833b401f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1
2025-09-19 12:00:58.657004 | orchestrator | 02f5e5c52465 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1
2025-09-19 12:00:58.657049 | orchestrator | 2dee571103e4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1
2025-09-19 12:00:58.657062 | orchestrator | 624da7b86050 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-19 12:00:58.657074 | orchestrator | 8e72f068470b registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-19 12:00:58.657085 | orchestrator | 145de7a2d222 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-listener-1
2025-09-19 12:00:58.657097 | orchestrator | f032ef6cb125 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-19 12:00:58.963781 | orchestrator |
2025-09-19 12:00:58.963892 | orchestrator | ## Images @ testbed-manager
2025-09-19 12:00:58.963908 | orchestrator |
2025-09-19 12:00:58.963920 | orchestrator | + echo
2025-09-19 12:00:58.963932 | orchestrator | + echo '## Images @
testbed-manager'
2025-09-19 12:00:58.963944 | orchestrator | + echo
2025-09-19 12:00:58.963955 | orchestrator | + osism container testbed-manager images
2025-09-19 12:01:01.348029 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 12:01:01.348206 | orchestrator | registry.osism.tech/osism/osism-ansible latest f96a8a84ca6e 2 hours ago 594MB
2025-09-19 12:01:01.348234 | orchestrator | registry.osism.tech/osism/osism latest caf71a42605c 3 hours ago 325MB
2025-09-19 12:01:01.348278 | orchestrator | registry.osism.tech/osism/osism-frontend latest 0e15c54d8d9c 3 hours ago 236MB
2025-09-19 12:01:01.348299 | orchestrator | registry.osism.tech/osism/homer v25.08.1 8c383e1d56e2 9 hours ago 11.5MB
2025-09-19 12:01:01.348317 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 84cc807d7f93 9 hours ago 243MB
2025-09-19 12:01:01.348334 | orchestrator | registry.osism.tech/osism/cephclient reef 89fec8934dce 9 hours ago 453MB
2025-09-19 12:01:01.348351 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 10 hours ago 320MB
2025-09-19 12:01:01.348372 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 10 hours ago 631MB
2025-09-19 12:01:01.348392 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 10 hours ago 748MB
2025-09-19 12:01:01.348411 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 3e7c6c197ac3 10 hours ago 459MB
2025-09-19 12:01:01.348429 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 4d720d677fef 10 hours ago 363MB
2025-09-19 12:01:01.348448 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 10 hours ago 412MB
2025-09-19 12:01:01.348466 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 84f15bd5d79b 10 hours ago 894MB
2025-09-19 12:01:01.348484 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 10 hours ago 360MB
2025-09-19 12:01:01.348502 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 9f643559a7a5 12 hours ago 589MB
2025-09-19 12:01:01.348549 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest b451f465ea51 12 hours ago 1.22GB
2025-09-19 12:01:01.348567 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 2fa59ab2ac91 12 hours ago 543MB
2025-09-19 12:01:01.348579 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 013533981ce6 12 hours ago 315MB
2025-09-19 12:01:01.348590 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 3 weeks ago 275MB
2025-09-19 12:01:01.348600 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 6 weeks ago 329MB
2025-09-19 12:01:01.348611 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 weeks ago 226MB
2025-09-19 12:01:01.348621 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-19 12:01:01.348632 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-19 12:01:01.348643 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-19 12:01:01.671702 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 12:01:01.672400 | orchestrator | ++ semver latest 5.0.0
2025-09-19 12:01:01.734153 | orchestrator |
2025-09-19 12:01:01.734211 | orchestrator | ## Containers @ testbed-node-0
2025-09-19 12:01:01.734221 | orchestrator |
2025-09-19 12:01:01.734228 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 12:01:01.734234 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 12:01:01.734241 | orchestrator | + echo
2025-09-19 12:01:01.734248 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-19 12:01:01.734255 | orchestrator | + echo
2025-09-19 12:01:01.734262 | orchestrator | + osism container testbed-node-0 ps
2025-09-19 12:01:03.886372 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 12:01:03.886469 | orchestrator | 9b83b467296e registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-19 12:01:03.886484 | orchestrator | 6d8b3b966ca2 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-19 12:01:03.886496 | orchestrator | fb66efe106e4 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-19 12:01:03.886791 | orchestrator | 4e6b0a662ac8 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-09-19 12:01:03.886810 | orchestrator | 832706dfb5ae registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-19 12:01:03.886840 | orchestrator | 1a8b8f1c5949 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-19 12:01:03.886852 | orchestrator | aeb35632d947 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 12:01:03.886864 | orchestrator | 384f2c3e27a4 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 12:01:03.886875 | orchestrator | 9d3c69dca354 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 12:01:03.886886 | orchestrator | bb1ffb16ac07 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-19 12:01:03.886918 | orchestrator | 1e788fea8a9a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10
minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 12:01:03.886930 | orchestrator | 6b644ce7d3d8 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 12:01:03.886940 | orchestrator | ad5ba0991898 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-09-19 12:01:03.886951 | orchestrator | 1eb839726768 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2025-09-19 12:01:03.886963 | orchestrator | 62ee6e19384c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-09-19 12:01:03.886974 | orchestrator | 6c753800c7ea registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-09-19 12:01:03.886985 | orchestrator | 813cf5f715f9 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-09-19 12:01:03.886995 | orchestrator | e79190d75af5 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-09-19 12:01:03.887424 | orchestrator | fd28717dd1d3 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 12:01:03.887441 | orchestrator | 08fca558ab3c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-09-19 12:01:03.887452 | orchestrator | fdde83ef21cc registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-19 12:01:03.888441 | orchestrator | a699680fb934 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-19 12:01:03.888464 | orchestrator | 5bb9b1bcaa2e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 12:01:03.888478 | orchestrator | 8bc9ac754e4a registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 12:01:03.888496 | orchestrator | a9e9fefe81f7 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-19 12:01:03.888507 | orchestrator | fb384f6a3ca0 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 12:01:03.888523 | orchestrator | 866b804ecfcc registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 12:01:03.888535 | orchestrator | df02291a598c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-09-19 12:01:03.888546 | orchestrator | c2893d88faf8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-09-19 12:01:03.888557 | orchestrator | e3a4119f7896 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-09-19 12:01:03.888578 | orchestrator | 99d3e7fe0def registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2025-09-19 12:01:03.888589 | orchestrator | 0ce75dbaa794 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-09-19 12:01:03.888600 | orchestrator | ab7593a6216a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-09-19 12:01:03.888611 | orchestrator | 0202d063e686 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 12:01:03.888622 | orchestrator | 1b2cf2d88b29 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-19 12:01:03.888633 | orchestrator | cad5b1b0c50e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-09-19 12:01:03.888644 | orchestrator | 64d1d40a0309 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-09-19 12:01:03.888655 | orchestrator | 069c230904d7 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-09-19 12:01:03.888666 | orchestrator | 5959b3793984 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-19 12:01:03.888677 | orchestrator | 585f4494508c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-09-19 12:01:03.888688 | orchestrator | 1147c3e74151 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 12:01:03.888699 | orchestrator | 504cbf13ef56 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-19 12:01:03.888710 | orchestrator | c50e4a4403db registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-19 12:01:03.888721 | orchestrator | 51793ec54de0 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-09-19 12:01:03.888743 | orchestrator | 7bda4868b4cc registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-09-19 12:01:03.888754 | orchestrator | b2073ff7ba0f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-09-19 12:01:03.888765 | orchestrator | f3ea54de07ec registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-19 12:01:03.888776 | orchestrator | b7ae04b18f31 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-09-19 12:01:03.888792 | orchestrator | 31b29d149b3c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-19 12:01:03.888809 | orchestrator | 371aef003bf4 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-19 12:01:03.888820 | orchestrator | 9a0d21feb7c6 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 12:01:03.888831 | orchestrator | 03f539287cad registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-19 12:01:03.888842 | orchestrator | f311507012a3 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-09-19 12:01:03.888853 | orchestrator | 706193fdb43c registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-19 12:01:03.888864 | orchestrator | d7579d2c5607 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up
29 minutes cron 2025-09-19 12:01:03.888875 | orchestrator | 265ae1c99461 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-19 12:01:03.888886 | orchestrator | 123995e96a56 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-09-19 12:01:04.088140 | orchestrator | 2025-09-19 12:01:04.088221 | orchestrator | ## Images @ testbed-node-0 2025-09-19 12:01:04.088235 | orchestrator | 2025-09-19 12:01:04.088247 | orchestrator | + echo 2025-09-19 12:01:04.088259 | orchestrator | + echo '## Images @ testbed-node-0' 2025-09-19 12:01:04.088271 | orchestrator | + echo 2025-09-19 12:01:04.088282 | orchestrator | + osism container testbed-node-0 images 2025-09-19 12:01:06.176639 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 12:01:06.177326 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e5544776978f 9 hours ago 1.27GB 2025-09-19 12:01:06.177354 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 10 hours ago 321MB 2025-09-19 12:01:06.177365 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 10 hours ago 1.59GB 2025-09-19 12:01:06.177375 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 10 hours ago 1.56GB 2025-09-19 12:01:06.177384 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 10 hours ago 420MB 2025-09-19 12:01:06.177394 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 10 hours ago 320MB 2025-09-19 12:01:06.177403 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 10 hours ago 377MB 2025-09-19 12:01:06.177413 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 10 hours ago 631MB 2025-09-19 12:01:06.177440 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 10 hours ago 331MB 2025-09-19 12:01:06.177450 | orchestrator | 
registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 10 hours ago 328MB 2025-09-19 12:01:06.177461 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 10 hours ago 1.05GB 2025-09-19 12:01:06.177470 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 10 hours ago 748MB 2025-09-19 12:01:06.177480 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 10 hours ago 356MB 2025-09-19 12:01:06.177489 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 10 hours ago 412MB 2025-09-19 12:01:06.177499 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 10 hours ago 347MB 2025-09-19 12:01:06.177524 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 10 hours ago 353MB 2025-09-19 12:01:06.177534 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 10 hours ago 360MB 2025-09-19 12:01:06.177543 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 10 hours ago 327MB 2025-09-19 12:01:06.177553 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 10 hours ago 327MB 2025-09-19 12:01:06.177562 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 10 hours ago 364MB 2025-09-19 12:01:06.177571 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 10 hours ago 364MB 2025-09-19 12:01:06.177581 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 10 hours ago 593MB 2025-09-19 12:01:06.177590 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 10 hours ago 1.21GB 2025-09-19 12:01:06.177600 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 10 hours ago 949MB 2025-09-19 12:01:06.177609 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 00a30ff3320e 10 hours ago 949MB 2025-09-19 
12:01:06.177619 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 10 hours ago 949MB 2025-09-19 12:01:06.177628 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 10 hours ago 949MB 2025-09-19 12:01:06.177638 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 619fa8ab46ad 10 hours ago 1.04GB 2025-09-19 12:01:06.177647 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 387ffb26bd8e 10 hours ago 1.04GB 2025-09-19 12:01:06.177657 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 10 hours ago 1.11GB 2025-09-19 12:01:06.177666 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 10 hours ago 1.16GB 2025-09-19 12:01:06.177676 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 10 hours ago 1.11GB 2025-09-19 12:01:06.177685 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f879e1c6c1ac 10 hours ago 1.25GB 2025-09-19 12:01:06.177695 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 10 hours ago 1.3GB 2025-09-19 12:01:06.177704 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 10 hours ago 1.42GB 2025-09-19 12:01:06.177714 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 10 hours ago 1.3GB 2025-09-19 12:01:06.177743 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 10 hours ago 1.3GB 2025-09-19 12:01:06.177753 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 10 hours ago 1.2GB 2025-09-19 12:01:06.177762 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 10 hours ago 1.31GB 2025-09-19 12:01:06.177772 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 10 hours ago 1.41GB 2025-09-19 12:01:06.177781 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 10 hours ago 1.41GB 2025-09-19 
12:01:06.177791 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 10 hours ago 1.15GB 2025-09-19 12:01:06.177800 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 10 hours ago 1.04GB 2025-09-19 12:01:06.177809 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 10 hours ago 1.06GB 2025-09-19 12:01:06.177819 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 10 hours ago 1.06GB 2025-09-19 12:01:06.177834 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 10 hours ago 1.06GB 2025-09-19 12:01:06.177843 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 10 hours ago 1.06GB 2025-09-19 12:01:06.177853 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 10 hours ago 1.05GB 2025-09-19 12:01:06.177862 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 1259920e02c4 10 hours ago 1.05GB 2025-09-19 12:01:06.177872 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 10 hours ago 1.05GB 2025-09-19 12:01:06.177881 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 10 hours ago 1.06GB 2025-09-19 12:01:06.177891 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 10 hours ago 1.05GB 2025-09-19 12:01:06.177900 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 9b2119f96562 10 hours ago 1.04GB 2025-09-19 12:01:06.177910 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 06200c7f46a8 10 hours ago 1.04GB 2025-09-19 12:01:06.177919 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 ccbc5d4c2242 10 hours ago 1.04GB 2025-09-19 12:01:06.177929 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 d4bc84d0863f 10 hours ago 1.04GB 2025-09-19 12:01:06.177938 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ddeb761bf282 10 hours ago 
1.12GB 2025-09-19 12:01:06.177948 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 30edefb98e4a 10 hours ago 1.11GB 2025-09-19 12:01:06.177957 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 f84f7ee7f274 10 hours ago 1.1GB 2025-09-19 12:01:06.177967 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3796447736e1 10 hours ago 1.12GB 2025-09-19 12:01:06.177976 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7807fcc26d86 10 hours ago 1.1GB 2025-09-19 12:01:06.177986 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 5e883a748a98 10 hours ago 1.1GB 2025-09-19 12:01:06.177996 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 fb223671de61 10 hours ago 1.12GB 2025-09-19 12:01:06.413066 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 12:01:06.413513 | orchestrator | ++ semver latest 5.0.0 2025-09-19 12:01:06.463709 | orchestrator | 2025-09-19 12:01:06.463790 | orchestrator | ## Containers @ testbed-node-1 2025-09-19 12:01:06.463805 | orchestrator | 2025-09-19 12:01:06.463816 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-19 12:01:06.463827 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 12:01:06.463839 | orchestrator | + echo 2025-09-19 12:01:06.463850 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-09-19 12:01:06.463862 | orchestrator | + echo 2025-09-19 12:01:06.463873 | orchestrator | + osism container testbed-node-1 ps 2025-09-19 12:01:08.626296 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 12:01:08.626387 | orchestrator | b011ef9aac6d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-19 12:01:08.626404 | orchestrator | c262359c66f6 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 
2025-09-19 12:01:08.626417 | orchestrator | 8c8450b5ff6d registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-19 12:01:08.626429 | orchestrator | 48a9fe05ebf8 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-09-19 12:01:08.626461 | orchestrator | 57c350e4f528 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-19 12:01:08.626473 | orchestrator | 5e29fca1b36b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-09-19 12:01:08.626484 | orchestrator | 4d2bcc29bbc9 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-19 12:01:08.626495 | orchestrator | 31a064a12647 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 12:01:08.626506 | orchestrator | 259bfbebd379 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 12:01:08.626517 | orchestrator | 6772461e8404 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-19 12:01:08.626528 | orchestrator | ae5adeb531a3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 12:01:08.626539 | orchestrator | 3e4917a68f95 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 12:01:08.626550 | orchestrator | 25f95237bb05 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-09-19 12:01:08.626565 | orchestrator | 1161921d0343 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2025-09-19 12:01:08.626576 | orchestrator | 224ecba93a02 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-09-19 12:01:08.626587 | orchestrator | 24929cafbdca registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-09-19 12:01:08.626598 | orchestrator | 278c2c0ffa6f registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-09-19 12:01:08.626609 | orchestrator | 1db9e2fc4523 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-09-19 12:01:08.626621 | orchestrator | 488dff3b3673 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 12:01:08.626632 | orchestrator | 190ddc96ae57 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-09-19 12:01:08.626649 | orchestrator | 1ae2065ca3b0 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-19 12:01:08.626679 | orchestrator | 603613e0b10a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-19 12:01:08.626691 | orchestrator | 9c46519cb423 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 12:01:08.626710 | orchestrator | 6f023531f16e registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 12:01:08.626721 | orchestrator | c7ff9a272494 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-19 12:01:08.626733 | orchestrator | 2e364218337c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 12:01:08.626744 | orchestrator | 906852d1a76a registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 12:01:08.626755 | orchestrator | 5095c89ad156 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-09-19 12:01:08.626766 | orchestrator | ce79dbb3592d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-09-19 12:01:08.626777 | orchestrator | 0cd475687985 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-09-19 12:01:08.626788 | orchestrator | 9a37ee2ea874 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-09-19 12:01:08.626799 | orchestrator | f88ec66767d2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-09-19 12:01:08.626810 | orchestrator | f5be17a46cc8 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-09-19 12:01:08.626821 | orchestrator | ac28c600f4b4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 12:01:08.626832 | orchestrator | da043bf04857 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) horizon
2025-09-19 12:01:08.626843 | orchestrator | 2ffa4a668175 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-19 12:01:08.626854 | orchestrator | a7d2bbc8507e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-09-19 12:01:08.626865 | orchestrator | b493a9729650 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-19 12:01:08.626876 | orchestrator | b3703058723a registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-09-19 12:01:08.626887 | orchestrator | bc74e7c3504f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-09-19 12:01:08.626898 | orchestrator | 98210ae81e7a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 12:01:08.626909 | orchestrator | cb6f39267e30 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-19 12:01:08.626920 | orchestrator | 6e945839249a registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-19 12:01:08.626945 | orchestrator | c8e3d558d6aa registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-09-19 12:01:08.626963 | orchestrator | 478740d46dc9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-09-19 12:01:08.626975 | orchestrator | 4886d13f8150 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-09-19 12:01:08.626986 | orchestrator | 1aded7e1f76c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-09-19 12:01:08.626997 | orchestrator | b823e8f1d137 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-09-19 12:01:08.627008 | orchestrator | 9c0c326713db registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-09-19 12:01:08.627019 | orchestrator | ab2a9379bc63 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-19 12:01:08.627030 | orchestrator | 4b6900513df4 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 12:01:08.627041 | orchestrator | 30af9e04ceb6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-19 12:01:08.627052 | orchestrator | ad54ad93dfdc registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-19 12:01:08.627063 | orchestrator | b3c945779848 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-19 12:01:08.627074 | orchestrator | 20f02fea6b9e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 12:01:08.627105 | orchestrator | 52fbe72de7d8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 12:01:08.627117 | orchestrator | 9ca7b23ceb94 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-19 12:01:08.836623 | orchestrator |
2025-09-19 12:01:08.836703 | orchestrator | ## Images @ testbed-node-1
2025-09-19 12:01:08.836718 | orchestrator |
2025-09-19 12:01:08.836730 | orchestrator | + echo
2025-09-19 12:01:08.836743 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-19 12:01:08.836755 | orchestrator | + echo
2025-09-19 12:01:08.836767 | orchestrator | + osism container testbed-node-1 images
2025-09-19 12:01:11.153012 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 12:01:11.153164 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e5544776978f 9 hours ago 1.27GB
2025-09-19 12:01:11.153181 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 10 hours ago 321MB
2025-09-19 12:01:11.153193 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 10 hours ago 1.59GB
2025-09-19 12:01:11.153205 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 10 hours ago 1.56GB
2025-09-19 12:01:11.153216 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 10 hours ago 420MB
2025-09-19 12:01:11.153269 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 10 hours ago 320MB
2025-09-19 12:01:11.153290 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 10 hours ago 377MB
2025-09-19 12:01:11.153307 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 10 hours ago 631MB
2025-09-19 12:01:11.153325 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 10 hours ago 331MB
2025-09-19 12:01:11.153345 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 10 hours ago 328MB
2025-09-19 12:01:11.153362 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 10 hours ago 1.05GB
2025-09-19 12:01:11.153380 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 10 hours ago 748MB
2025-09-19 12:01:11.153391 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 10 hours ago 356MB
2025-09-19 12:01:11.153402 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 10 hours ago 412MB
2025-09-19 12:01:11.153413 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 10 hours ago 347MB
2025-09-19 12:01:11.153424 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 10 hours ago 353MB
2025-09-19 12:01:11.153435 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 10 hours ago 360MB
2025-09-19 12:01:11.153446 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 10 hours ago 327MB
2025-09-19 12:01:11.153457 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 10 hours ago 327MB
2025-09-19 12:01:11.153487 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 10 hours ago 364MB
2025-09-19 12:01:11.153498 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 10 hours ago 364MB
2025-09-19 12:01:11.153509 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 10 hours ago 593MB
2025-09-19 12:01:11.153525 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 10 hours ago 1.21GB
2025-09-19 12:01:11.153536 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 10 hours ago 949MB
2025-09-19 12:01:11.153547 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 00a30ff3320e 10 hours ago 949MB
2025-09-19 12:01:11.153557 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 10 hours ago 949MB
2025-09-19 12:01:11.153568 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 10 hours ago 949MB
2025-09-19 12:01:11.153579 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 10 hours ago 1.11GB
2025-09-19 12:01:11.153590 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 10 hours ago 1.16GB
2025-09-19 12:01:11.153601 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 10 hours ago 1.11GB
2025-09-19 12:01:11.153612 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f879e1c6c1ac 10 hours ago 1.25GB
2025-09-19 12:01:11.153623 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 10 hours ago 1.3GB
2025-09-19 12:01:11.153634 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 10 hours ago 1.42GB
2025-09-19 12:01:11.153645 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 10 hours ago 1.3GB
2025-09-19 12:01:11.153656 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 10 hours ago 1.3GB
2025-09-19 12:01:11.153675 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 10 hours ago 1.2GB
2025-09-19 12:01:11.153706 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 10 hours ago 1.31GB
2025-09-19 12:01:11.153719 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 10 hours ago 1.41GB
2025-09-19 12:01:11.153730 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 10 hours ago 1.41GB
2025-09-19 12:01:11.153741 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 10 hours ago 1.15GB
2025-09-19 12:01:11.153752 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 10 hours ago 1.04GB
2025-09-19 12:01:11.153762 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 10 hours ago 1.06GB
2025-09-19 12:01:11.153773 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 10 hours ago 1.06GB
2025-09-19 12:01:11.153784 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 10 hours ago 1.06GB
2025-09-19 12:01:11.153795 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 10 hours ago 1.06GB
2025-09-19 12:01:11.153805 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 10 hours ago 1.05GB
2025-09-19 12:01:11.153816 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 1259920e02c4 10 hours ago 1.05GB
2025-09-19 12:01:11.153827 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 10 hours ago 1.05GB
2025-09-19 12:01:11.153838 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 10 hours ago 1.06GB
2025-09-19 12:01:11.153849 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 10 hours ago 1.05GB
2025-09-19 12:01:11.153860 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 f84f7ee7f274 10 hours ago 1.1GB
2025-09-19 12:01:11.153870 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3796447736e1 10 hours ago 1.12GB
2025-09-19 12:01:11.153881 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7807fcc26d86 10 hours ago 1.1GB
2025-09-19 12:01:11.153892 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 5e883a748a98 10 hours ago 1.1GB
2025-09-19 12:01:11.153903 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 fb223671de61 10 hours ago 1.12GB
2025-09-19 12:01:11.469518 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 12:01:11.469933 | orchestrator | ++ semver latest 5.0.0
2025-09-19 12:01:11.523081 | orchestrator |
2025-09-19 12:01:11.523216 | orchestrator | ## Containers @ testbed-node-2
2025-09-19 12:01:11.523232 | orchestrator |
2025-09-19 12:01:11.523245 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 12:01:11.523264 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 12:01:11.523284 | orchestrator | + echo
2025-09-19 12:01:11.523302 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-09-19 12:01:11.523322 | orchestrator | + echo
2025-09-19 12:01:11.523338 | orchestrator | + osism container testbed-node-2 ps
2025-09-19 12:01:14.003907 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 12:01:14.004036 | orchestrator | 6931a04ee17b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-19 12:01:14.004063 | orchestrator | d791ff460fd8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-19 12:01:14.004084 | orchestrator | a694da2cf5ad registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-19 12:01:14.004236 | orchestrator | fe78426bd787 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-09-19 12:01:14.004263 | orchestrator | 534bbad1a648 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-19 12:01:14.004305 | orchestrator | 42c927d73aa8 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 12:01:14.004326 | orchestrator | 684ed0768957 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-19 12:01:14.004346 | orchestrator | 9fb40e180236 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 12:01:14.004364 | orchestrator | 2becf4a4abc2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 12:01:14.004382 | orchestrator | f1fbb1619f57 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-19 12:01:14.004401 | orchestrator | 6e9b3255122b registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 12:01:14.004420 | orchestrator | c8eedfd0864f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 12:01:14.004439 | orchestrator | 0cbd650c1366 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-09-19 12:01:14.004459 | orchestrator | 5ad2f9aaa439 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns
2025-09-19 12:01:14.004478 | orchestrator | 1f5a9bda427f registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-09-19 12:01:14.004496 | orchestrator | f0e4dba09848 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-09-19 12:01:14.004515 | orchestrator | 347d88b5260d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-09-19 12:01:14.004534 | orchestrator | 81e40d3ed1ac registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-09-19 12:01:14.004553 | orchestrator | ff4a2c5b1466 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 12:01:14.004571 | orchestrator | 5c0e6114ed51 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-09-19 12:01:14.004590 | orchestrator | 5ed05b8a7843 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy)
barbican_api 2025-09-19 12:01:14.004633 | orchestrator | 5ea5e67c93cf registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-09-19 12:01:14.004667 | orchestrator | 979f79c83cac registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-19 12:01:14.004685 | orchestrator | 140bcd881083 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-09-19 12:01:14.004702 | orchestrator | 589a7f3108dc registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-09-19 12:01:14.004718 | orchestrator | 494f935a1cd1 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-09-19 12:01:14.004735 | orchestrator | 40d33ff5ec8f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-09-19 12:01:14.004751 | orchestrator | b5d4f6b8fc1a registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-09-19 12:01:14.004768 | orchestrator | 07ce2c058f11 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-09-19 12:01:14.004784 | orchestrator | 80601a960aa8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-09-19 12:01:14.004800 | orchestrator | 84ac8f19b9b6 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-09-19 12:01:14.004818 | orchestrator | 6ccc0a5d18b5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr 
-…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-09-19 12:01:14.004833 | orchestrator | 3ec4b380614f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-09-19 12:01:14.004850 | orchestrator | 44dc4398a89e registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-09-19 12:01:14.004863 | orchestrator | 491f0156e0cb registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-09-19 12:01:14.004873 | orchestrator | d34a39295910 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-09-19 12:01:14.004882 | orchestrator | 4efffa4aaa0f registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-19 12:01:14.004892 | orchestrator | 66d331bcb838 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-09-19 12:01:14.004909 | orchestrator | 84275b60f201 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-09-19 12:01:14.004919 | orchestrator | 6eeaf2b99e67 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-09-19 12:01:14.004929 | orchestrator | aba0826e5b1b registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-09-19 12:01:14.004938 | orchestrator | d6e950095a45 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-09-19 12:01:14.004968 | orchestrator | e17ce24fc967 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-09-19 
12:01:14.004978 | orchestrator | c9d17badcb8c registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-09-19 12:01:14.004995 | orchestrator | 3cf57d17fe58 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-09-19 12:01:14.005005 | orchestrator | e8b57f212e2f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-09-19 12:01:14.005020 | orchestrator | 36074081b5b6 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-09-19 12:01:14.005030 | orchestrator | 39d30266f500 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-09-19 12:01:14.005040 | orchestrator | cce834b89f89 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-09-19 12:01:14.005050 | orchestrator | c3a0a722c69f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-09-19 12:01:14.005060 | orchestrator | d5522f873db2 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-09-19 12:01:14.005069 | orchestrator | bbb36a50bed3 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-09-19 12:01:14.005079 | orchestrator | c3ef1021b48b registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-09-19 12:01:14.005089 | orchestrator | 823e4f9be0ea registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-09-19 12:01:14.005099 | orchestrator | e5ae29d88fe7 
registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-09-19 12:01:14.005109 | orchestrator | 03ddd557165e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-19 12:01:14.005142 | orchestrator | 710128ab6a47 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-09-19 12:01:14.420970 | orchestrator | 2025-09-19 12:01:14.421049 | orchestrator | ## Images @ testbed-node-2 2025-09-19 12:01:14.421060 | orchestrator | 2025-09-19 12:01:14.421069 | orchestrator | + echo 2025-09-19 12:01:14.421078 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-19 12:01:14.421087 | orchestrator | + echo 2025-09-19 12:01:14.421095 | orchestrator | + osism container testbed-node-2 images 2025-09-19 12:01:16.864227 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 12:01:16.864291 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e5544776978f 9 hours ago 1.27GB 2025-09-19 12:01:16.864304 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 10 hours ago 321MB 2025-09-19 12:01:16.864315 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 10 hours ago 1.59GB 2025-09-19 12:01:16.864326 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 10 hours ago 1.56GB 2025-09-19 12:01:16.864363 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 10 hours ago 420MB 2025-09-19 12:01:16.864374 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 10 hours ago 320MB 2025-09-19 12:01:16.864385 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 10 hours ago 377MB 2025-09-19 12:01:16.864396 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 10 hours ago 631MB 2025-09-19 12:01:16.864406 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 10 hours 
ago 331MB 2025-09-19 12:01:16.864417 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 10 hours ago 328MB 2025-09-19 12:01:16.864428 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 10 hours ago 1.05GB 2025-09-19 12:01:16.864439 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 10 hours ago 748MB 2025-09-19 12:01:16.864449 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 10 hours ago 356MB 2025-09-19 12:01:16.864460 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 10 hours ago 412MB 2025-09-19 12:01:16.864470 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 10 hours ago 347MB 2025-09-19 12:01:16.864481 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 10 hours ago 353MB 2025-09-19 12:01:16.864492 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 10 hours ago 360MB 2025-09-19 12:01:16.864503 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 10 hours ago 327MB 2025-09-19 12:01:16.864513 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 10 hours ago 327MB 2025-09-19 12:01:16.864524 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 10 hours ago 364MB 2025-09-19 12:01:16.864535 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 10 hours ago 364MB 2025-09-19 12:01:16.864546 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 10 hours ago 593MB 2025-09-19 12:01:16.864556 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 10 hours ago 1.21GB 2025-09-19 12:01:16.864567 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 10 hours ago 949MB 2025-09-19 12:01:16.864578 | orchestrator | registry.osism.tech/kolla/ovn-northd 
2024.2 00a30ff3320e 10 hours ago 949MB 2025-09-19 12:01:16.864589 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 10 hours ago 949MB 2025-09-19 12:01:16.864599 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 10 hours ago 949MB 2025-09-19 12:01:16.864610 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 10 hours ago 1.11GB 2025-09-19 12:01:16.864620 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 10 hours ago 1.16GB 2025-09-19 12:01:16.864647 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 10 hours ago 1.11GB 2025-09-19 12:01:16.864658 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f879e1c6c1ac 10 hours ago 1.25GB 2025-09-19 12:01:16.864669 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 10 hours ago 1.3GB 2025-09-19 12:01:16.864680 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 10 hours ago 1.42GB 2025-09-19 12:01:16.864690 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 10 hours ago 1.3GB 2025-09-19 12:01:16.864708 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 10 hours ago 1.3GB 2025-09-19 12:01:16.864719 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 10 hours ago 1.2GB 2025-09-19 12:01:16.864743 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 10 hours ago 1.31GB 2025-09-19 12:01:16.864754 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 10 hours ago 1.41GB 2025-09-19 12:01:16.864768 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 10 hours ago 1.41GB 2025-09-19 12:01:16.864781 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 10 hours ago 1.15GB 2025-09-19 12:01:16.864793 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 10 
hours ago 1.04GB 2025-09-19 12:01:16.864805 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 10 hours ago 1.06GB 2025-09-19 12:01:16.864818 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 10 hours ago 1.06GB 2025-09-19 12:01:16.864830 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 10 hours ago 1.06GB 2025-09-19 12:01:16.864843 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 10 hours ago 1.06GB 2025-09-19 12:01:16.864855 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 10 hours ago 1.05GB 2025-09-19 12:01:16.864868 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 1259920e02c4 10 hours ago 1.05GB 2025-09-19 12:01:16.864880 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 10 hours ago 1.05GB 2025-09-19 12:01:16.864892 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 10 hours ago 1.06GB 2025-09-19 12:01:16.864905 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 10 hours ago 1.05GB 2025-09-19 12:01:16.864922 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 f84f7ee7f274 10 hours ago 1.1GB 2025-09-19 12:01:16.864941 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3796447736e1 10 hours ago 1.12GB 2025-09-19 12:01:16.864960 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7807fcc26d86 10 hours ago 1.1GB 2025-09-19 12:01:16.864978 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 5e883a748a98 10 hours ago 1.1GB 2025-09-19 12:01:16.864996 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 fb223671de61 10 hours ago 1.12GB 2025-09-19 12:01:17.165975 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-19 12:01:17.173066 | orchestrator | + set -e 2025-09-19 12:01:17.173132 | 
orchestrator | + source /opt/manager-vars.sh 2025-09-19 12:01:17.173880 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 12:01:17.173905 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 12:01:17.173918 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 12:01:17.173931 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 12:01:17.173944 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 12:01:17.173958 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 12:01:17.173971 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 12:01:17.173983 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 12:01:17.173995 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 12:01:17.174008 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 12:01:17.174063 | orchestrator | ++ export ARA=false 2025-09-19 12:01:17.174075 | orchestrator | ++ ARA=false 2025-09-19 12:01:17.174086 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 12:01:17.174101 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 12:01:17.174113 | orchestrator | ++ export TEMPEST=false 2025-09-19 12:01:17.174125 | orchestrator | ++ TEMPEST=false 2025-09-19 12:01:17.174135 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 12:01:17.174190 | orchestrator | ++ IS_ZUUL=true 2025-09-19 12:01:17.174201 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 12:01:17.174212 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 12:01:17.174223 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 12:01:17.174234 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 12:01:17.174245 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 12:01:17.174256 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 12:01:17.174267 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 12:01:17.174277 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 12:01:17.174288 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 
12:01:17.174299 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 12:01:17.174310 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-19 12:01:17.174321 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-19 12:01:17.184646 | orchestrator | + set -e 2025-09-19 12:01:17.184704 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 12:01:17.184713 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 12:01:17.184720 | orchestrator | ++ INTERACTIVE=false 2025-09-19 12:01:17.184726 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 12:01:17.184732 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 12:01:17.184773 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 12:01:17.185871 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 12:01:17.192339 | orchestrator | 2025-09-19 12:01:17.192375 | orchestrator | # Ceph status 2025-09-19 12:01:17.192383 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 12:01:17.192391 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 12:01:17.192398 | orchestrator | + echo 2025-09-19 12:01:17.192404 | orchestrator | + echo '# Ceph status' 2025-09-19 12:01:17.192620 | orchestrator | 2025-09-19 12:01:17.192632 | orchestrator | + echo 2025-09-19 12:01:17.192638 | orchestrator | + ceph -s 2025-09-19 12:01:17.769406 | orchestrator | cluster: 2025-09-19 12:01:17.769504 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-09-19 12:01:17.769519 | orchestrator | health: HEALTH_OK 2025-09-19 12:01:17.769532 | orchestrator | 2025-09-19 12:01:17.769544 | orchestrator | services: 2025-09-19 12:01:17.769555 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-09-19 12:01:17.769568 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 
2025-09-19 12:01:17.769579 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-09-19 12:01:17.769591 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-09-19 12:01:17.769602 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-09-19 12:01:17.769614 | orchestrator | 2025-09-19 12:01:17.769625 | orchestrator | data: 2025-09-19 12:01:17.769635 | orchestrator | volumes: 1/1 healthy 2025-09-19 12:01:17.769646 | orchestrator | pools: 14 pools, 401 pgs 2025-09-19 12:01:17.769657 | orchestrator | objects: 523 objects, 2.2 GiB 2025-09-19 12:01:17.769669 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-09-19 12:01:17.769680 | orchestrator | pgs: 401 active+clean 2025-09-19 12:01:17.769691 | orchestrator | 2025-09-19 12:01:17.814274 | orchestrator | 2025-09-19 12:01:17.814367 | orchestrator | # Ceph versions 2025-09-19 12:01:17.814383 | orchestrator | 2025-09-19 12:01:17.814396 | orchestrator | + echo 2025-09-19 12:01:17.814407 | orchestrator | + echo '# Ceph versions' 2025-09-19 12:01:17.814419 | orchestrator | + echo 2025-09-19 12:01:17.814430 | orchestrator | + ceph versions 2025-09-19 12:01:18.392506 | orchestrator | { 2025-09-19 12:01:18.392578 | orchestrator | "mon": { 2025-09-19 12:01:18.392585 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-19 12:01:18.392591 | orchestrator | }, 2025-09-19 12:01:18.392596 | orchestrator | "mgr": { 2025-09-19 12:01:18.392601 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-19 12:01:18.392605 | orchestrator | }, 2025-09-19 12:01:18.392610 | orchestrator | "osd": { 2025-09-19 12:01:18.392614 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-09-19 12:01:18.392618 | orchestrator | }, 2025-09-19 12:01:18.392623 | orchestrator | "mds": { 2025-09-19 12:01:18.392627 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-19 12:01:18.392631 | orchestrator | }, 2025-09-19 12:01:18.392636 | orchestrator | "rgw": { 2025-09-19 12:01:18.392640 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-19 12:01:18.392663 | orchestrator | }, 2025-09-19 12:01:18.392668 | orchestrator | "overall": { 2025-09-19 12:01:18.392673 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-09-19 12:01:18.392677 | orchestrator | } 2025-09-19 12:01:18.392681 | orchestrator | } 2025-09-19 12:01:18.445664 | orchestrator | 2025-09-19 12:01:18.445726 | orchestrator | # Ceph OSD tree 2025-09-19 12:01:18.445732 | orchestrator | 2025-09-19 12:01:18.445737 | orchestrator | + echo 2025-09-19 12:01:18.445742 | orchestrator | + echo '# Ceph OSD tree' 2025-09-19 12:01:18.445747 | orchestrator | + echo 2025-09-19 12:01:18.445755 | orchestrator | + ceph osd df tree 2025-09-19 12:01:18.945572 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-09-19 12:01:18.945689 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-09-19 12:01:18.945704 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-09-19 12:01:18.945716 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.74 1.31 200 up osd.0 2025-09-19 12:01:18.945728 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 836 MiB 763 MiB 1 KiB 74 MiB 19 GiB 4.09 0.69 190 up osd.4 2025-09-19 12:01:18.945740 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-09-19 12:01:18.945751 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 949 MiB 875 MiB 1 KiB 74 MiB 19 GiB 4.64 0.78 176 up osd.1 2025-09-19 12:01:18.945762 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 
GiB 1 KiB 70 MiB 19 GiB 7.20 1.22 216 up osd.3 2025-09-19 12:01:18.945774 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-09-19 12:01:18.945785 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.22 1.05 191 up osd.2 2025-09-19 12:01:18.945797 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.62 0.95 197 up osd.5 2025-09-19 12:01:18.945808 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-09-19 12:01:18.945820 | orchestrator | MIN/MAX VAR: 0.69/1.31 STDDEV: 1.30 2025-09-19 12:01:18.995757 | orchestrator | 2025-09-19 12:01:18.995788 | orchestrator | # Ceph monitor status 2025-09-19 12:01:18.995801 | orchestrator | 2025-09-19 12:01:18.995813 | orchestrator | + echo 2025-09-19 12:01:18.995824 | orchestrator | + echo '# Ceph monitor status' 2025-09-19 12:01:18.995835 | orchestrator | + echo 2025-09-19 12:01:18.995847 | orchestrator | + ceph mon stat 2025-09-19 12:01:19.585450 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-09-19 12:01:19.629959 | orchestrator | 2025-09-19 12:01:19.630087 | orchestrator | # Ceph quorum status 2025-09-19 12:01:19.630104 | orchestrator | 2025-09-19 12:01:19.630116 | orchestrator | + echo 2025-09-19 12:01:19.630128 | orchestrator | + echo '# Ceph quorum status' 2025-09-19 12:01:19.630140 | orchestrator | + echo 2025-09-19 12:01:19.630398 | orchestrator | + ceph quorum_status 2025-09-19 12:01:19.630421 | orchestrator | + jq 2025-09-19 12:01:20.224707 | orchestrator | { 2025-09-19 12:01:20.224810 | orchestrator | "election_epoch": 8, 2025-09-19 12:01:20.224826 | 
orchestrator | "quorum": [ 2025-09-19 12:01:20.224839 | orchestrator | 0, 2025-09-19 12:01:20.224850 | orchestrator | 1, 2025-09-19 12:01:20.224862 | orchestrator | 2 2025-09-19 12:01:20.224873 | orchestrator | ], 2025-09-19 12:01:20.224884 | orchestrator | "quorum_names": [ 2025-09-19 12:01:20.224895 | orchestrator | "testbed-node-0", 2025-09-19 12:01:20.224907 | orchestrator | "testbed-node-1", 2025-09-19 12:01:20.224918 | orchestrator | "testbed-node-2" 2025-09-19 12:01:20.224930 | orchestrator | ], 2025-09-19 12:01:20.224967 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-09-19 12:01:20.224979 | orchestrator | "quorum_age": 1652, 2025-09-19 12:01:20.224990 | orchestrator | "features": { 2025-09-19 12:01:20.225002 | orchestrator | "quorum_con": "4540138322906710015", 2025-09-19 12:01:20.225013 | orchestrator | "quorum_mon": [ 2025-09-19 12:01:20.225024 | orchestrator | "kraken", 2025-09-19 12:01:20.225035 | orchestrator | "luminous", 2025-09-19 12:01:20.225046 | orchestrator | "mimic", 2025-09-19 12:01:20.225057 | orchestrator | "osdmap-prune", 2025-09-19 12:01:20.225068 | orchestrator | "nautilus", 2025-09-19 12:01:20.225078 | orchestrator | "octopus", 2025-09-19 12:01:20.225090 | orchestrator | "pacific", 2025-09-19 12:01:20.225101 | orchestrator | "elector-pinging", 2025-09-19 12:01:20.225111 | orchestrator | "quincy", 2025-09-19 12:01:20.225122 | orchestrator | "reef" 2025-09-19 12:01:20.225133 | orchestrator | ] 2025-09-19 12:01:20.225144 | orchestrator | }, 2025-09-19 12:01:20.225155 | orchestrator | "monmap": { 2025-09-19 12:01:20.225216 | orchestrator | "epoch": 1, 2025-09-19 12:01:20.225227 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-09-19 12:01:20.225238 | orchestrator | "modified": "2025-09-19T11:33:29.433243Z", 2025-09-19 12:01:20.225306 | orchestrator | "created": "2025-09-19T11:33:29.433243Z", 2025-09-19 12:01:20.225320 | orchestrator | "min_mon_release": 18, 2025-09-19 12:01:20.225332 | orchestrator | 
"min_mon_release_name": "reef", 2025-09-19 12:01:20.225345 | orchestrator | "election_strategy": 1, 2025-09-19 12:01:20.225357 | orchestrator | "disallowed_leaders: ": "", 2025-09-19 12:01:20.225369 | orchestrator | "stretch_mode": false, 2025-09-19 12:01:20.225381 | orchestrator | "tiebreaker_mon": "", 2025-09-19 12:01:20.225392 | orchestrator | "removed_ranks: ": "", 2025-09-19 12:01:20.225405 | orchestrator | "features": { 2025-09-19 12:01:20.225417 | orchestrator | "persistent": [ 2025-09-19 12:01:20.225429 | orchestrator | "kraken", 2025-09-19 12:01:20.225441 | orchestrator | "luminous", 2025-09-19 12:01:20.225453 | orchestrator | "mimic", 2025-09-19 12:01:20.225465 | orchestrator | "osdmap-prune", 2025-09-19 12:01:20.225476 | orchestrator | "nautilus", 2025-09-19 12:01:20.225488 | orchestrator | "octopus", 2025-09-19 12:01:20.225501 | orchestrator | "pacific", 2025-09-19 12:01:20.225513 | orchestrator | "elector-pinging", 2025-09-19 12:01:20.225526 | orchestrator | "quincy", 2025-09-19 12:01:20.225538 | orchestrator | "reef" 2025-09-19 12:01:20.225550 | orchestrator | ], 2025-09-19 12:01:20.225562 | orchestrator | "optional": [] 2025-09-19 12:01:20.225575 | orchestrator | }, 2025-09-19 12:01:20.225586 | orchestrator | "mons": [ 2025-09-19 12:01:20.225597 | orchestrator | { 2025-09-19 12:01:20.225608 | orchestrator | "rank": 0, 2025-09-19 12:01:20.225619 | orchestrator | "name": "testbed-node-0", 2025-09-19 12:01:20.225630 | orchestrator | "public_addrs": { 2025-09-19 12:01:20.225641 | orchestrator | "addrvec": [ 2025-09-19 12:01:20.225651 | orchestrator | { 2025-09-19 12:01:20.225662 | orchestrator | "type": "v2", 2025-09-19 12:01:20.225673 | orchestrator | "addr": "192.168.16.10:3300", 2025-09-19 12:01:20.225683 | orchestrator | "nonce": 0 2025-09-19 12:01:20.225695 | orchestrator | }, 2025-09-19 12:01:20.225705 | orchestrator | { 2025-09-19 12:01:20.225716 | orchestrator | "type": "v1", 2025-09-19 12:01:20.225727 | orchestrator | "addr": 
"192.168.16.10:6789", 2025-09-19 12:01:20.225737 | orchestrator | "nonce": 0 2025-09-19 12:01:20.225748 | orchestrator | } 2025-09-19 12:01:20.225759 | orchestrator | ] 2025-09-19 12:01:20.225770 | orchestrator | }, 2025-09-19 12:01:20.225781 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-09-19 12:01:20.225791 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-09-19 12:01:20.225802 | orchestrator | "priority": 0, 2025-09-19 12:01:20.225813 | orchestrator | "weight": 0, 2025-09-19 12:01:20.225824 | orchestrator | "crush_location": "{}" 2025-09-19 12:01:20.225834 | orchestrator | }, 2025-09-19 12:01:20.225845 | orchestrator | { 2025-09-19 12:01:20.225856 | orchestrator | "rank": 1, 2025-09-19 12:01:20.225867 | orchestrator | "name": "testbed-node-1", 2025-09-19 12:01:20.225878 | orchestrator | "public_addrs": { 2025-09-19 12:01:20.225888 | orchestrator | "addrvec": [ 2025-09-19 12:01:20.225899 | orchestrator | { 2025-09-19 12:01:20.225909 | orchestrator | "type": "v2", 2025-09-19 12:01:20.225920 | orchestrator | "addr": "192.168.16.11:3300", 2025-09-19 12:01:20.225931 | orchestrator | "nonce": 0 2025-09-19 12:01:20.225942 | orchestrator | }, 2025-09-19 12:01:20.225953 | orchestrator | { 2025-09-19 12:01:20.225974 | orchestrator | "type": "v1", 2025-09-19 12:01:20.225985 | orchestrator | "addr": "192.168.16.11:6789", 2025-09-19 12:01:20.225996 | orchestrator | "nonce": 0 2025-09-19 12:01:20.226007 | orchestrator | } 2025-09-19 12:01:20.226070 | orchestrator | ] 2025-09-19 12:01:20.226083 | orchestrator | }, 2025-09-19 12:01:20.226094 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-09-19 12:01:20.226105 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-09-19 12:01:20.226116 | orchestrator | "priority": 0, 2025-09-19 12:01:20.226126 | orchestrator | "weight": 0, 2025-09-19 12:01:20.226137 | orchestrator | "crush_location": "{}" 2025-09-19 12:01:20.226148 | orchestrator | }, 2025-09-19 12:01:20.226176 | orchestrator | { 2025-09-19 
12:01:20.226188 | orchestrator | "rank": 2,
2025-09-19 12:01:20.226199 | orchestrator | "name": "testbed-node-2",
2025-09-19 12:01:20.226210 | orchestrator | "public_addrs": {
2025-09-19 12:01:20.226221 | orchestrator | "addrvec": [
2025-09-19 12:01:20.226231 | orchestrator | {
2025-09-19 12:01:20.226242 | orchestrator | "type": "v2",
2025-09-19 12:01:20.226253 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-19 12:01:20.226264 | orchestrator | "nonce": 0
2025-09-19 12:01:20.226275 | orchestrator | },
2025-09-19 12:01:20.226286 | orchestrator | {
2025-09-19 12:01:20.226296 | orchestrator | "type": "v1",
2025-09-19 12:01:20.226307 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-19 12:01:20.226317 | orchestrator | "nonce": 0
2025-09-19 12:01:20.226328 | orchestrator | }
2025-09-19 12:01:20.226339 | orchestrator | ]
2025-09-19 12:01:20.226350 | orchestrator | },
2025-09-19 12:01:20.226360 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-19 12:01:20.226372 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-19 12:01:20.226382 | orchestrator | "priority": 0,
2025-09-19 12:01:20.226393 | orchestrator | "weight": 0,
2025-09-19 12:01:20.226404 | orchestrator | "crush_location": "{}"
2025-09-19 12:01:20.226415 | orchestrator | }
2025-09-19 12:01:20.226425 | orchestrator | ]
2025-09-19 12:01:20.226436 | orchestrator | }
2025-09-19 12:01:20.226446 | orchestrator | }
2025-09-19 12:01:20.226470 | orchestrator |
2025-09-19 12:01:20.226482 | orchestrator | # Ceph free space status
2025-09-19 12:01:20.226493 | orchestrator |
2025-09-19 12:01:20.226504 | orchestrator | + echo
2025-09-19 12:01:20.226514 | orchestrator | + echo '# Ceph free space status'
2025-09-19 12:01:20.226525 | orchestrator | + echo
2025-09-19 12:01:20.226536 | orchestrator | + ceph df
2025-09-19 12:01:20.803023 | orchestrator | --- RAW STORAGE ---
2025-09-19 12:01:20.803128 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2025-09-19 12:01:20.803158 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2025-09-19 12:01:20.803196 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2025-09-19 12:01:20.803209 | orchestrator |
2025-09-19 12:01:20.803220 | orchestrator | --- POOLS ---
2025-09-19 12:01:20.803232 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2025-09-19 12:01:20.803244 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     52 GiB
2025-09-19 12:01:20.803268 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2025-09-19 12:01:20.803279 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2025-09-19 12:01:20.803290 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2025-09-19 12:01:20.803301 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2025-09-19 12:01:20.803312 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2025-09-19 12:01:20.803323 | orchestrator | default.rgw.log             7   32  3.6 KiB      177  408 KiB      0     35 GiB
2025-09-19 12:01:20.803334 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2025-09-19 12:01:20.803344 | orchestrator | .rgw.root                   9   32  3.5 KiB        7   56 KiB      0     52 GiB
2025-09-19 12:01:20.803355 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2025-09-19 12:01:20.803366 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2025-09-19 12:01:20.803377 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.98     35 GiB
2025-09-19 12:01:20.803387 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2025-09-19 12:01:20.803427 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2025-09-19 12:01:20.850778 | orchestrator | ++ semver latest 5.0.0
2025-09-19 12:01:20.908208 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 12:01:20.908278 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 12:01:20.908292 | orchestrator | + [[ !
-e /etc/redhat-release ]] 2025-09-19 12:01:20.908303 | orchestrator | + osism apply facts 2025-09-19 12:01:33.091551 | orchestrator | 2025-09-19 12:01:33 | INFO  | Task 511091ae-e050-4f38-a9ff-6a2fd2cd9bd4 (facts) was prepared for execution. 2025-09-19 12:01:33.091678 | orchestrator | 2025-09-19 12:01:33 | INFO  | It takes a moment until task 511091ae-e050-4f38-a9ff-6a2fd2cd9bd4 (facts) has been started and output is visible here. 2025-09-19 12:01:46.592577 | orchestrator | 2025-09-19 12:01:46.592694 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 12:01:46.592703 | orchestrator | 2025-09-19 12:01:46.592718 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 12:01:46.592724 | orchestrator | Friday 19 September 2025 12:01:37 +0000 (0:00:00.276) 0:00:00.276 ****** 2025-09-19 12:01:46.592730 | orchestrator | ok: [testbed-manager] 2025-09-19 12:01:46.592736 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:01:46.592742 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:01:46.592747 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:01:46.592753 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:01:46.592758 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:01:46.592763 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:01:46.592768 | orchestrator | 2025-09-19 12:01:46.592772 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 12:01:46.592777 | orchestrator | Friday 19 September 2025 12:01:38 +0000 (0:00:01.433) 0:00:01.709 ****** 2025-09-19 12:01:46.592783 | orchestrator | skipping: [testbed-manager] 2025-09-19 12:01:46.592788 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:01:46.592793 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:01:46.592798 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:01:46.592803 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
12:01:46.592808 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:01:46.592813 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:01:46.592817 | orchestrator | 2025-09-19 12:01:46.592822 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 12:01:46.592827 | orchestrator | 2025-09-19 12:01:46.592832 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 12:01:46.592837 | orchestrator | Friday 19 September 2025 12:01:39 +0000 (0:00:01.198) 0:00:02.908 ****** 2025-09-19 12:01:46.592842 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:01:46.592847 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:01:46.592852 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:01:46.592857 | orchestrator | ok: [testbed-manager] 2025-09-19 12:01:46.592861 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:01:46.592866 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:01:46.592871 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:01:46.592876 | orchestrator | 2025-09-19 12:01:46.592881 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 12:01:46.592886 | orchestrator | 2025-09-19 12:01:46.592890 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 12:01:46.592895 | orchestrator | Friday 19 September 2025 12:01:45 +0000 (0:00:05.739) 0:00:08.647 ****** 2025-09-19 12:01:46.592900 | orchestrator | skipping: [testbed-manager] 2025-09-19 12:01:46.592905 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:01:46.592910 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:01:46.592914 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:01:46.592919 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:01:46.592924 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:01:46.592929 | orchestrator | skipping: [testbed-node-5] 
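An aside on the `ceph df` output captured earlier in this log: the same free-space numbers can be checked programmatically. Below is a minimal sketch, assuming the general shape of `ceph df --format json` output; the field names and the `pools_above` helper are assumptions for illustration, not taken from this log, and the values mirror the table above.

```python
# Minimal sketch: flag Ceph pools that are close to full, using numbers
# that mirror the `ceph df` table above. The dict imitates (a subset of)
# the JSON emitted by `ceph df --format json`; the exact field names are
# assumptions for illustration, not taken from this log.
ceph_df = {
    "stats": {"total_bytes": 120 * 2**30, "total_avail_bytes": 113 * 2**30},
    "pools": [
        {"name": "images", "stats": {"percent_used": 0.0598, "max_avail": 35 * 2**30}},
        {"name": "vms", "stats": {"percent_used": 0.0, "max_avail": 35 * 2**30}},
    ],
}

def pools_above(df: dict, threshold: float) -> list:
    """Return names of pools whose used fraction exceeds the threshold."""
    return [p["name"] for p in df["pools"] if p["stats"]["percent_used"] > threshold]

print(pools_above(ceph_df, 0.05))  # -> ['images']
```

In this testbed only the `images` pool carries data (the Glance images), so only it trips a 5% threshold.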
2025-09-19 12:01:46.592934 | orchestrator |
2025-09-19 12:01:46.592939 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 12:01:46.592970 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.592976 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.592981 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.592986 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.592991 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.592996 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.593001 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:46.593006 | orchestrator |
2025-09-19 12:01:46.593011 | orchestrator |
2025-09-19 12:01:46.593016 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 12:01:46.593021 | orchestrator | Friday 19 September 2025 12:01:46 +0000 (0:00:00.571) 0:00:09.218 ******
2025-09-19 12:01:46.593027 | orchestrator | ===============================================================================
2025-09-19 12:01:46.593031 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.74s
2025-09-19 12:01:46.593036 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.43s
2025-09-19 12:01:46.593041 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-09-19 12:01:46.593063 | orchestrator | Gather facts for all hosts
---------------------------------------------- 0.57s 2025-09-19 12:01:46.894428 | orchestrator | + osism validate ceph-mons 2025-09-19 12:02:19.055730 | orchestrator | 2025-09-19 12:02:19.055850 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-19 12:02:19.055864 | orchestrator | 2025-09-19 12:02:19.055874 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 12:02:19.055883 | orchestrator | Friday 19 September 2025 12:02:03 +0000 (0:00:00.439) 0:00:00.439 ****** 2025-09-19 12:02:19.055893 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.055902 | orchestrator | 2025-09-19 12:02:19.055911 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 12:02:19.055921 | orchestrator | Friday 19 September 2025 12:02:04 +0000 (0:00:00.672) 0:00:01.112 ****** 2025-09-19 12:02:19.055937 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.055952 | orchestrator | 2025-09-19 12:02:19.055966 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 12:02:19.055981 | orchestrator | Friday 19 September 2025 12:02:05 +0000 (0:00:00.816) 0:00:01.928 ****** 2025-09-19 12:02:19.055995 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056010 | orchestrator | 2025-09-19 12:02:19.056023 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 12:02:19.056038 | orchestrator | Friday 19 September 2025 12:02:05 +0000 (0:00:00.236) 0:00:02.165 ****** 2025-09-19 12:02:19.056052 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056067 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:19.056082 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:19.056095 | orchestrator | 2025-09-19 12:02:19.056108 | orchestrator | TASK [Get 
container info] ****************************************************** 2025-09-19 12:02:19.056124 | orchestrator | Friday 19 September 2025 12:02:05 +0000 (0:00:00.363) 0:00:02.529 ****** 2025-09-19 12:02:19.056140 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:19.056184 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:19.056201 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056211 | orchestrator | 2025-09-19 12:02:19.056220 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 12:02:19.056229 | orchestrator | Friday 19 September 2025 12:02:06 +0000 (0:00:01.052) 0:00:03.581 ****** 2025-09-19 12:02:19.056238 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056247 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:02:19.056258 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:02:19.056268 | orchestrator | 2025-09-19 12:02:19.056278 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 12:02:19.056289 | orchestrator | Friday 19 September 2025 12:02:07 +0000 (0:00:00.301) 0:00:03.883 ****** 2025-09-19 12:02:19.056299 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056309 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:19.056319 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:19.056329 | orchestrator | 2025-09-19 12:02:19.056340 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 12:02:19.056350 | orchestrator | Friday 19 September 2025 12:02:07 +0000 (0:00:00.511) 0:00:04.394 ****** 2025-09-19 12:02:19.056360 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056370 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:19.056380 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:19.056390 | orchestrator | 2025-09-19 12:02:19.056401 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] 
******************** 2025-09-19 12:02:19.056412 | orchestrator | Friday 19 September 2025 12:02:07 +0000 (0:00:00.293) 0:00:04.688 ****** 2025-09-19 12:02:19.056422 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056433 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:02:19.056443 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:02:19.056453 | orchestrator | 2025-09-19 12:02:19.056462 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-19 12:02:19.056471 | orchestrator | Friday 19 September 2025 12:02:08 +0000 (0:00:00.330) 0:00:05.019 ****** 2025-09-19 12:02:19.056480 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056488 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:19.056521 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:19.056530 | orchestrator | 2025-09-19 12:02:19.056539 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 12:02:19.056548 | orchestrator | Friday 19 September 2025 12:02:08 +0000 (0:00:00.307) 0:00:05.327 ****** 2025-09-19 12:02:19.056557 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056565 | orchestrator | 2025-09-19 12:02:19.056574 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 12:02:19.056583 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.706) 0:00:06.033 ****** 2025-09-19 12:02:19.056591 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056600 | orchestrator | 2025-09-19 12:02:19.056609 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 12:02:19.056631 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.268) 0:00:06.302 ****** 2025-09-19 12:02:19.056641 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056649 | orchestrator | 2025-09-19 12:02:19.056658 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-09-19 12:02:19.056667 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.252) 0:00:06.555 ****** 2025-09-19 12:02:19.056676 | orchestrator | 2025-09-19 12:02:19.056691 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:02:19.056711 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.069) 0:00:06.624 ****** 2025-09-19 12:02:19.056730 | orchestrator | 2025-09-19 12:02:19.056744 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:02:19.056758 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.070) 0:00:06.694 ****** 2025-09-19 12:02:19.056772 | orchestrator | 2025-09-19 12:02:19.056787 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 12:02:19.056814 | orchestrator | Friday 19 September 2025 12:02:09 +0000 (0:00:00.070) 0:00:06.765 ****** 2025-09-19 12:02:19.056826 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056840 | orchestrator | 2025-09-19 12:02:19.056853 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 12:02:19.056867 | orchestrator | Friday 19 September 2025 12:02:10 +0000 (0:00:00.251) 0:00:07.016 ****** 2025-09-19 12:02:19.056883 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.056898 | orchestrator | 2025-09-19 12:02:19.056936 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-19 12:02:19.056951 | orchestrator | Friday 19 September 2025 12:02:10 +0000 (0:00:00.255) 0:00:07.272 ****** 2025-09-19 12:02:19.056960 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.056969 | orchestrator | 2025-09-19 12:02:19.056978 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 
2025-09-19 12:02:19.056986 | orchestrator | Friday 19 September 2025 12:02:10 +0000 (0:00:00.115) 0:00:07.387 ****** 2025-09-19 12:02:19.056995 | orchestrator | changed: [testbed-node-0] 2025-09-19 12:02:19.057004 | orchestrator | 2025-09-19 12:02:19.057013 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-19 12:02:19.057021 | orchestrator | Friday 19 September 2025 12:02:12 +0000 (0:00:01.593) 0:00:08.981 ****** 2025-09-19 12:02:19.057030 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057039 | orchestrator | 2025-09-19 12:02:19.057047 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-19 12:02:19.057056 | orchestrator | Friday 19 September 2025 12:02:12 +0000 (0:00:00.311) 0:00:09.293 ****** 2025-09-19 12:02:19.057065 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057073 | orchestrator | 2025-09-19 12:02:19.057082 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-19 12:02:19.057090 | orchestrator | Friday 19 September 2025 12:02:12 +0000 (0:00:00.315) 0:00:09.609 ****** 2025-09-19 12:02:19.057099 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057108 | orchestrator | 2025-09-19 12:02:19.057116 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-19 12:02:19.057125 | orchestrator | Friday 19 September 2025 12:02:13 +0000 (0:00:00.325) 0:00:09.934 ****** 2025-09-19 12:02:19.057134 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057142 | orchestrator | 2025-09-19 12:02:19.057151 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-19 12:02:19.057159 | orchestrator | Friday 19 September 2025 12:02:13 +0000 (0:00:00.302) 0:00:10.237 ****** 2025-09-19 12:02:19.057168 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057176 | orchestrator | 
2025-09-19 12:02:19.057185 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-19 12:02:19.057194 | orchestrator | Friday 19 September 2025 12:02:13 +0000 (0:00:00.123) 0:00:10.360 ****** 2025-09-19 12:02:19.057202 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057211 | orchestrator | 2025-09-19 12:02:19.057220 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-19 12:02:19.057228 | orchestrator | Friday 19 September 2025 12:02:13 +0000 (0:00:00.127) 0:00:10.488 ****** 2025-09-19 12:02:19.057237 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057246 | orchestrator | 2025-09-19 12:02:19.057254 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-19 12:02:19.057263 | orchestrator | Friday 19 September 2025 12:02:13 +0000 (0:00:00.103) 0:00:10.592 ****** 2025-09-19 12:02:19.057272 | orchestrator | changed: [testbed-node-0] 2025-09-19 12:02:19.057280 | orchestrator | 2025-09-19 12:02:19.057289 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-19 12:02:19.057298 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:01.367) 0:00:11.959 ****** 2025-09-19 12:02:19.057307 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057315 | orchestrator | 2025-09-19 12:02:19.057324 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-19 12:02:19.057340 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:00.328) 0:00:12.288 ****** 2025-09-19 12:02:19.057349 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057357 | orchestrator | 2025-09-19 12:02:19.057366 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-19 12:02:19.057375 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:00.144) 
0:00:12.433 ****** 2025-09-19 12:02:19.057383 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:19.057392 | orchestrator | 2025-09-19 12:02:19.057400 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-19 12:02:19.057409 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:00.140) 0:00:12.573 ****** 2025-09-19 12:02:19.057417 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057426 | orchestrator | 2025-09-19 12:02:19.057435 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-19 12:02:19.057444 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:00.151) 0:00:12.725 ****** 2025-09-19 12:02:19.057452 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057461 | orchestrator | 2025-09-19 12:02:19.057469 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 12:02:19.057478 | orchestrator | Friday 19 September 2025 12:02:16 +0000 (0:00:00.311) 0:00:13.037 ****** 2025-09-19 12:02:19.057487 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.057527 | orchestrator | 2025-09-19 12:02:19.057537 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 12:02:19.057545 | orchestrator | Friday 19 September 2025 12:02:16 +0000 (0:00:00.252) 0:00:13.289 ****** 2025-09-19 12:02:19.057554 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:19.057562 | orchestrator | 2025-09-19 12:02:19.057571 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 12:02:19.057579 | orchestrator | Friday 19 September 2025 12:02:16 +0000 (0:00:00.253) 0:00:13.543 ****** 2025-09-19 12:02:19.057588 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.057597 | orchestrator | 2025-09-19 12:02:19.057605 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 12:02:19.057617 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:01.617) 0:00:15.161 ****** 2025-09-19 12:02:19.057632 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.057647 | orchestrator | 2025-09-19 12:02:19.057660 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 12:02:19.057674 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:00.275) 0:00:15.436 ****** 2025-09-19 12:02:19.057689 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:19.057704 | orchestrator | 2025-09-19 12:02:19.057729 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:02:21.525635 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:00.252) 0:00:15.689 ****** 2025-09-19 12:02:21.525738 | orchestrator | 2025-09-19 12:02:21.525755 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:02:21.525768 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:00.069) 0:00:15.758 ****** 2025-09-19 12:02:21.525779 | orchestrator | 2025-09-19 12:02:21.525791 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:02:21.525802 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:00.068) 0:00:15.827 ****** 2025-09-19 12:02:21.525813 | orchestrator | 2025-09-19 12:02:21.525824 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 12:02:21.525836 | orchestrator | Friday 19 September 2025 12:02:19 +0000 (0:00:00.075) 0:00:15.902 ****** 2025-09-19 12:02:21.525847 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:21.525859 | orchestrator | 
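The quorum test exercised in this validator run boils down to a set comparison: every monitor named in the monmap must also appear in the quorum reported by the cluster. A minimal sketch of that logic follows; the host names mirror this testbed, and in a real check the two lists would be parsed from `ceph mon dump --format json` and `ceph quorum_status --format json` (field names assumed, not shown in this log).

```python
# Minimal sketch of the quorum check: every monitor in the monmap must
# appear in the quorum reported by the cluster. Host names mirror this
# testbed; the data sources named in the lead-in are assumptions.
monmap_mons = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
quorum_names = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

def quorum_ok(mons, quorum):
    """Return (passed, missing): passed iff no monmap mon is outside the quorum."""
    missing = sorted(set(mons) - set(quorum))
    return (not missing, missing)

passed, missing = quorum_ok(monmap_mons, quorum_names)
print(passed, missing)  # -> True []
```

With all three testbed mons in quorum, the check passes, matching the "Pass quorum test if all monitors are in quorum" result above.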
2025-09-19 12:02:21.525870 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 12:02:21.525908 | orchestrator | Friday 19 September 2025 12:02:20 +0000 (0:00:01.568) 0:00:17.471 ******
2025-09-19 12:02:21.525919 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 12:02:21.525931 | orchestrator |     "msg": [
2025-09-19 12:02:21.525944 | orchestrator |         "Validator run completed.",
2025-09-19 12:02:21.525955 | orchestrator |         "You can find the report file here:",
2025-09-19 12:02:21.525967 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-09-19T12:02:04+00:00-report.json",
2025-09-19 12:02:21.525979 | orchestrator |         "on the following host:",
2025-09-19 12:02:21.525990 | orchestrator |         "testbed-manager"
2025-09-19 12:02:21.526080 | orchestrator |     ]
2025-09-19 12:02:21.526096 | orchestrator | }
2025-09-19 12:02:21.526108 | orchestrator |
2025-09-19 12:02:21.526119 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 12:02:21.526134 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 12:02:21.526155 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:02:21.526174 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:02:21.526192 | orchestrator |
2025-09-19 12:02:21.526210 | orchestrator |
2025-09-19 12:02:21.526228 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 12:02:21.526247 | orchestrator | Friday 19 September 2025 12:02:21 +0000 (0:00:00.604) 0:00:18.076 ******
2025-09-19 12:02:21.526267 | orchestrator | ===============================================================================
2025-09-19 12:02:21.526287 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s
2025-09-19 12:02:21.526302 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.59s
2025-09-19 12:02:21.526315 | orchestrator | Write report file ------------------------------------------------------- 1.57s
2025-09-19 12:02:21.526328 | orchestrator | Gather status data ------------------------------------------------------ 1.37s
2025-09-19 12:02:21.526341 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2025-09-19 12:02:21.526353 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2025-09-19 12:02:21.526366 | orchestrator | Aggregate test results step one ----------------------------------------- 0.71s
2025-09-19 12:02:21.526378 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-09-19 12:02:21.526391 | orchestrator | Print report file information ------------------------------------------- 0.60s
2025-09-19 12:02:21.526403 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2025-09-19 12:02:21.526417 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s
2025-09-19 12:02:21.526443 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s
2025-09-19 12:02:21.526456 | orchestrator | Set health test data ---------------------------------------------------- 0.33s
2025-09-19 12:02:21.526475 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2025-09-19 12:02:21.526487 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s
2025-09-19 12:02:21.526498 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s
2025-09-19 12:02:21.526537 | orchestrator | Pass cluster-health if status is OK
(strict) ---------------------------- 0.31s 2025-09-19 12:02:21.526555 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2025-09-19 12:02:21.526567 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2025-09-19 12:02:21.526578 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-09-19 12:02:21.817975 | orchestrator | + osism validate ceph-mgrs 2025-09-19 12:02:52.851311 | orchestrator | 2025-09-19 12:02:52.851426 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-19 12:02:52.851443 | orchestrator | 2025-09-19 12:02:52.851455 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 12:02:52.851467 | orchestrator | Friday 19 September 2025 12:02:38 +0000 (0:00:00.454) 0:00:00.454 ****** 2025-09-19 12:02:52.851478 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:52.851489 | orchestrator | 2025-09-19 12:02:52.851501 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 12:02:52.851512 | orchestrator | Friday 19 September 2025 12:02:38 +0000 (0:00:00.643) 0:00:01.097 ****** 2025-09-19 12:02:52.851522 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 12:02:52.851534 | orchestrator | 2025-09-19 12:02:52.851545 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 12:02:52.851555 | orchestrator | Friday 19 September 2025 12:02:39 +0000 (0:00:00.863) 0:00:01.961 ****** 2025-09-19 12:02:52.851567 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:52.851579 | orchestrator | 2025-09-19 12:02:52.851590 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 12:02:52.851601 | orchestrator | Friday 19 
September 2025 12:02:39 +0000 (0:00:00.252) 0:00:02.213 ****** 2025-09-19 12:02:52.851612 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:52.851623 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:52.851634 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:52.851645 | orchestrator | 2025-09-19 12:02:52.851656 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-19 12:02:52.851667 | orchestrator | Friday 19 September 2025 12:02:40 +0000 (0:00:00.284) 0:00:02.498 ****** 2025-09-19 12:02:52.851719 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:52.851730 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:52.851742 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:52.851753 | orchestrator | 2025-09-19 12:02:52.851765 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 12:02:52.851776 | orchestrator | Friday 19 September 2025 12:02:41 +0000 (0:00:01.066) 0:00:03.565 ****** 2025-09-19 12:02:52.851788 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:02:52.851799 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:02:52.851810 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:02:52.851821 | orchestrator | 2025-09-19 12:02:52.851832 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 12:02:52.851846 | orchestrator | Friday 19 September 2025 12:02:41 +0000 (0:00:00.296) 0:00:03.861 ****** 2025-09-19 12:02:52.851859 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:02:52.851871 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:02:52.851883 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:02:52.851900 | orchestrator | 2025-09-19 12:02:52.851920 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 12:02:52.851942 | orchestrator | Friday 19 September 2025 12:02:42 +0000 (0:00:00.524) 0:00:04.386 
******
2025-09-19 12:02:52.851961 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.851974 | orchestrator | ok: [testbed-node-1]
2025-09-19 12:02:52.851986 | orchestrator | ok: [testbed-node-2]
2025-09-19 12:02:52.851998 | orchestrator |
2025-09-19 12:02:52.852012 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-09-19 12:02:52.852025 | orchestrator | Friday 19 September 2025  12:02:42 +0000 (0:00:00.312)       0:00:04.698 ******
2025-09-19 12:02:52.852037 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852050 | orchestrator | skipping: [testbed-node-1]
2025-09-19 12:02:52.852062 | orchestrator | skipping: [testbed-node-2]
2025-09-19 12:02:52.852074 | orchestrator |
2025-09-19 12:02:52.852087 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-09-19 12:02:52.852099 | orchestrator | Friday 19 September 2025  12:02:42 +0000 (0:00:00.290)       0:00:04.989 ******
2025-09-19 12:02:52.852112 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.852151 | orchestrator | ok: [testbed-node-1]
2025-09-19 12:02:52.852164 | orchestrator | ok: [testbed-node-2]
2025-09-19 12:02:52.852176 | orchestrator |
2025-09-19 12:02:52.852189 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 12:02:52.852202 | orchestrator | Friday 19 September 2025  12:02:42 +0000 (0:00:00.307)       0:00:05.296 ******
2025-09-19 12:02:52.852215 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852228 | orchestrator |
2025-09-19 12:02:52.852241 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 12:02:52.852252 | orchestrator | Friday 19 September 2025  12:02:43 +0000 (0:00:00.649)       0:00:05.946 ******
2025-09-19 12:02:52.852263 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852274 | orchestrator |
2025-09-19 12:02:52.852285 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 12:02:52.852296 | orchestrator | Friday 19 September 2025  12:02:43 +0000 (0:00:00.260)       0:00:06.206 ******
2025-09-19 12:02:52.852306 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852317 | orchestrator |
2025-09-19 12:02:52.852328 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.852339 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.248)       0:00:06.454 ******
2025-09-19 12:02:52.852350 | orchestrator |
2025-09-19 12:02:52.852361 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.852371 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.070)       0:00:06.525 ******
2025-09-19 12:02:52.852382 | orchestrator |
2025-09-19 12:02:52.852408 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.852419 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.071)       0:00:06.596 ******
2025-09-19 12:02:52.852430 | orchestrator |
2025-09-19 12:02:52.852441 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 12:02:52.852452 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.072)       0:00:06.668 ******
2025-09-19 12:02:52.852463 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852474 | orchestrator |
2025-09-19 12:02:52.852484 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-19 12:02:52.852495 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.242)       0:00:06.911 ******
2025-09-19 12:02:52.852506 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852517 | orchestrator |
2025-09-19 12:02:52.852545 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-09-19 12:02:52.852556 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.264)       0:00:07.176 ******
2025-09-19 12:02:52.852567 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.852578 | orchestrator |
2025-09-19 12:02:52.852589 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-09-19 12:02:52.852600 | orchestrator | Friday 19 September 2025  12:02:44 +0000 (0:00:00.119)       0:00:07.295 ******
2025-09-19 12:02:52.852610 | orchestrator | changed: [testbed-node-0]
2025-09-19 12:02:52.852621 | orchestrator |
2025-09-19 12:02:52.852632 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-09-19 12:02:52.852659 | orchestrator | Friday 19 September 2025  12:02:46 +0000 (0:00:02.005)       0:00:09.300 ******
2025-09-19 12:02:52.852671 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.852699 | orchestrator |
2025-09-19 12:02:52.852721 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-09-19 12:02:52.852732 | orchestrator | Friday 19 September 2025  12:02:47 +0000 (0:00:00.278)       0:00:09.579 ******
2025-09-19 12:02:52.852743 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.852754 | orchestrator |
2025-09-19 12:02:52.852765 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-09-19 12:02:52.852776 | orchestrator | Friday 19 September 2025  12:02:47 +0000 (0:00:00.702)       0:00:10.282 ******
2025-09-19 12:02:52.852787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852798 | orchestrator |
2025-09-19 12:02:52.852809 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-09-19 12:02:52.852830 | orchestrator | Friday 19 September 2025  12:02:48 +0000 (0:00:00.137)       0:00:10.419 ******
2025-09-19 12:02:52.852841 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:02:52.852852 | orchestrator |
2025-09-19 12:02:52.852863 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-19 12:02:52.852874 | orchestrator | Friday 19 September 2025  12:02:48 +0000 (0:00:00.155)       0:00:10.575 ******
2025-09-19 12:02:52.852885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 12:02:52.852896 | orchestrator |
2025-09-19 12:02:52.852907 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-19 12:02:52.852918 | orchestrator | Friday 19 September 2025  12:02:48 +0000 (0:00:00.245)       0:00:10.820 ******
2025-09-19 12:02:52.852929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:02:52.852940 | orchestrator |
2025-09-19 12:02:52.852951 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 12:02:52.852962 | orchestrator | Friday 19 September 2025  12:02:48 +0000 (0:00:00.244)       0:00:11.065 ******
2025-09-19 12:02:52.852973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 12:02:52.852984 | orchestrator |
2025-09-19 12:02:52.852995 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 12:02:52.853006 | orchestrator | Friday 19 September 2025  12:02:49 +0000 (0:00:01.236)       0:00:12.301 ******
2025-09-19 12:02:52.853017 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 12:02:52.853028 | orchestrator |
2025-09-19 12:02:52.853039 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 12:02:52.853050 | orchestrator | Friday 19 September 2025  12:02:50 +0000 (0:00:00.247)       0:00:12.549 ******
2025-09-19 12:02:52.853061 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 12:02:52.853072 | orchestrator |
2025-09-19 12:02:52.853083 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.853094 | orchestrator | Friday 19 September 2025  12:02:50 +0000 (0:00:00.285)       0:00:12.834 ******
2025-09-19 12:02:52.853105 | orchestrator |
2025-09-19 12:02:52.853116 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.853126 | orchestrator | Friday 19 September 2025  12:02:50 +0000 (0:00:00.067)       0:00:12.902 ******
2025-09-19 12:02:52.853137 | orchestrator |
2025-09-19 12:02:52.853148 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 12:02:52.853159 | orchestrator | Friday 19 September 2025  12:02:50 +0000 (0:00:00.068)       0:00:12.971 ******
2025-09-19 12:02:52.853170 | orchestrator |
2025-09-19 12:02:52.853181 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-19 12:02:52.853192 | orchestrator | Friday 19 September 2025  12:02:50 +0000 (0:00:00.072)       0:00:13.043 ******
2025-09-19 12:02:52.853203 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 12:02:52.853214 | orchestrator |
2025-09-19 12:02:52.853225 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 12:02:52.853236 | orchestrator | Friday 19 September 2025  12:02:52 +0000 (0:00:01.685)       0:00:14.729 ******
2025-09-19 12:02:52.853247 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 12:02:52.853258 | orchestrator |     "msg": [
2025-09-19 12:02:52.853270 | orchestrator |         "Validator run completed.",
2025-09-19 12:02:52.853281 | orchestrator |         "You can find the report file here:",
2025-09-19 12:02:52.853292 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-09-19T12:02:38+00:00-report.json",
2025-09-19 12:02:52.853304 | orchestrator |         "on the following host:",
2025-09-19 12:02:52.853316 | orchestrator |         "testbed-manager"
2025-09-19 12:02:52.853327 | orchestrator |     ]
2025-09-19 12:02:52.853338 | orchestrator | }
2025-09-19 12:02:52.853350 | orchestrator |
2025-09-19 12:02:52.853361 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 12:02:52.853380 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-19 12:02:52.853392 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:02:52.853411 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:02:53.153591 | orchestrator |
2025-09-19 12:02:53.153782 | orchestrator |
2025-09-19 12:02:53.153836 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 12:02:53.153852 | orchestrator | Friday 19 September 2025  12:02:52 +0000 (0:00:00.411)       0:00:15.141 ******
2025-09-19 12:02:53.153863 | orchestrator | ===============================================================================
2025-09-19 12:02:53.153875 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.01s
2025-09-19 12:02:53.153886 | orchestrator | Write report file ------------------------------------------------------- 1.69s
2025-09-19 12:02:53.153897 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s
2025-09-19 12:02:53.153907 | orchestrator | Get container info ------------------------------------------------------ 1.07s
2025-09-19 12:02:53.153918 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-09-19 12:02:53.153929 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.70s
2025-09-19 12:02:53.153940 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2025-09-19 12:02:53.153951 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-09-19 12:02:53.153962 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2025-09-19 12:02:53.153973 | orchestrator | Print report file information ------------------------------------------- 0.41s
2025-09-19 12:02:53.153983 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-09-19 12:02:53.153994 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-09-19 12:02:53.154005 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2025-09-19 12:02:53.154068 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2025-09-19 12:02:53.154082 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2025-09-19 12:02:53.154092 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2025-09-19 12:02:53.154103 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.28s
2025-09-19 12:02:53.154114 | orchestrator | Fail due to missing containers ------------------------------------------ 0.26s
2025-09-19 12:02:53.154125 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-09-19 12:02:53.154137 | orchestrator | Define report vars ------------------------------------------------------ 0.25s
2025-09-19 12:02:53.463407 | orchestrator | + osism validate ceph-osds
2025-09-19 12:03:14.058351 | orchestrator |
2025-09-19 12:03:14.058464 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-19 12:03:14.058481 | orchestrator |
2025-09-19 12:03:14.058493 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-19 12:03:14.058505 | orchestrator | Friday 19 September 2025  12:03:09 +0000 (0:00:00.483)       0:00:00.483 ******
2025-09-19 12:03:14.058517 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 12:03:14.058528 | orchestrator |
2025-09-19 12:03:14.058539 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 12:03:14.058551 | orchestrator | Friday 19 September 2025  12:03:10 +0000 (0:00:00.720)       0:00:01.204 ******
2025-09-19 12:03:14.058563 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 12:03:14.058574 | orchestrator |
2025-09-19 12:03:14.058585 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 12:03:14.058622 | orchestrator | Friday 19 September 2025  12:03:10 +0000 (0:00:00.242)       0:00:01.447 ******
2025-09-19 12:03:14.058634 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 12:03:14.058645 | orchestrator |
2025-09-19 12:03:14.058656 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 12:03:14.058667 | orchestrator | Friday 19 September 2025  12:03:11 +0000 (0:00:00.959)       0:00:02.406 ******
2025-09-19 12:03:14.058679 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:14.058691 | orchestrator |
2025-09-19 12:03:14.058703 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 12:03:14.058714 | orchestrator | Friday 19 September 2025  12:03:11 +0000 (0:00:00.131)       0:00:02.538 ******
2025-09-19 12:03:14.058725 | orchestrator | skipping: [testbed-node-3]
2025-09-19 12:03:14.058736 | orchestrator |
2025-09-19 12:03:14.058747 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 12:03:14.058758 | orchestrator | Friday 19 September 2025  12:03:12 +0000 (0:00:00.166)       0:00:02.704 ******
2025-09-19 12:03:14.058769 | orchestrator | skipping: [testbed-node-3]
2025-09-19 12:03:14.058780 | orchestrator | skipping: [testbed-node-4]
2025-09-19 12:03:14.058815 | orchestrator | skipping: [testbed-node-5]
2025-09-19 12:03:14.058826 | orchestrator |
2025-09-19 12:03:14.058837 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 12:03:14.058851 | orchestrator | Friday 19 September 2025  12:03:12 +0000 (0:00:00.317)       0:00:03.022 ******
2025-09-19 12:03:14.058864 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:14.058877 | orchestrator |
2025-09-19 12:03:14.058904 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 12:03:14.058918 | orchestrator | Friday 19 September 2025  12:03:12 +0000 (0:00:00.146)       0:00:03.168 ******
2025-09-19 12:03:14.058930 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:14.058943 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:14.058955 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:14.058967 | orchestrator |
2025-09-19 12:03:14.058980 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-19 12:03:14.058993 | orchestrator | Friday 19 September 2025  12:03:12 +0000 (0:00:00.305)       0:00:03.473 ******
2025-09-19 12:03:14.059006 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:14.059019 | orchestrator |
2025-09-19 12:03:14.059032 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 12:03:14.059045 | orchestrator | Friday 19 September 2025  12:03:13 +0000 (0:00:00.535)       0:00:04.009 ******
2025-09-19 12:03:14.059057 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:14.059068 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:14.059078 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:14.059089 | orchestrator |
2025-09-19 12:03:14.059100 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-19 12:03:14.059111 | orchestrator | Friday 19 September 2025  12:03:13 +0000 (0:00:00.467)       0:00:04.476 ******
2025-09-19 12:03:14.059125 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76a353da4f53e6e245661081bc3c0c01709b7bf9cb1671fd6c43df0205ad7427', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.059139 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b8a8603c8a288178c4800f8a8b9a87be9d8aaf61bc6f5db0036d04467cc0b255', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.059151 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c395deaf2544f2551418386cab458de7e5528664dc6851f2c58f4b28433b083e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.059164 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6df2ec8ba405b6c1765599bef0eb89741d822bac59b8a16b6ad1c253b9b92641', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.059190 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'be240137071abfb9f04ea3c710851b7ab74aa9fb94b5d9b3310d0af5395897b4', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.059220 | orchestrator | skipping: [testbed-node-3] => (item={'id': '331f0ac032bd16e6cd46649760124122064c13689b99f7856c5ca72820ae2032', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.059232 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3ac0ed800b7f6cf06f8fffd6334c8e39ebc10e241ae3aa2204a83c79f10e249b', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.059244 | orchestrator | skipping: [testbed-node-3] => (item={'id': '617995b3ea0b304af5e8a054b1a4dbc0ad7d16aa38da38dc2a8fa8f8dda23d3d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.059255 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a61d4da38ffa4e5cb007be41390358180bf54f7bb13c2421c87c35a6ebdecd26', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 12:03:14.059271 | orchestrator | skipping: [testbed-node-3] => (item={'id': '743c65ddc01c82b91cb83dbbd8d0b34b36fd86a01458b387258685191a87e287', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 12:03:14.059282 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6d13a87525377e81a98fc4ab6b229f62f502c02589c24eda384927b3a8b537df', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 12:03:14.059299 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b471c681a6de8e1c4122c2e9eb06495af5e4aae559dc9dd755ff9c850fd27eab', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 12:03:14.059311 | orchestrator | ok: [testbed-node-3] => (item={'id': '4aa9905adff4b962b410da02471c47ca45dc025e8467cf2de17234c099f595ab', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:14.059322 | orchestrator | ok: [testbed-node-3] => (item={'id': '88b63a3fceccad3c0926c9ffccecf4797bb1e6c2b5981ed0847dfb32212689e9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:14.059334 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7bd141d00727fc478ffb5a26ca6b6edccd3c436e606fb638250d839723d5f791', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-19 12:03:14.059345 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ebd795e983ed95f40f3eb3ce048a2dd76c7e101076d138f26cbefee6e53fdaae', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 12:03:14.059357 | orchestrator | skipping: [testbed-node-3] => (item={'id': '21ebfe720b1158e089f03899abab9a9645ee46b925fccdb158163546b650a307', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 12:03:14.059369 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3da5fc3b06c221230ed91b8e5cb481ab9bff5d2b82ba8deefcc0454b8f71b18c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 12:03:14.059387 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c20042a6f28591258674ea4beeabc1af074af5680c34a47e0ec275bb5f4e215', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:14.059399 | orchestrator | skipping: [testbed-node-3] => (item={'id': '27fe1959e824133f4885bbb01bb3ebc5683a1ae6de769cfe3bb1810264375989', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:14.059411 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c219fbbd1970d1d088876e53ce0457c51d3f63875bfbbcdda80a16bd8e9e1748', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.059428 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6d92c34d4a3fe714c4e69f7f55b9ec154059ea4d5cd29d064a965e37c0a6c3ad', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.218932 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd31f06f739ce989f5fa4ff9118f3a2f1771e1c24c4b9e187b3ec162116f59816', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.219032 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fff65c106799152e0140b21030242fcc7b9ca53fc47784d25b1e2eba8afabac1', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.219048 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0f062e76c4dbdf2b4d18126ec1f3a0a07185fab4d7815bbf53620255627a68fc', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.219061 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb9dbaf21e6ca83b43236774cc0bb807c6b11f2cd08a96f9eeabbd0d1cabfdcc', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.219073 | orchestrator | skipping: [testbed-node-4] => (item={'id': '156f87a0aee1eeffacc696486cddb9301c2f066e1d8731011f3d56fd909de516', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.219085 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc3275571305523d7f2a05167fd174674189ae8f2b8453b971f9b6518fc27797', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.219096 | orchestrator | skipping: [testbed-node-4] => (item={'id': '223169c921336a322f37f15006451a49b3324c455e8272a66187a637ba16b2fe', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 12:03:14.219127 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cbc8cad08d5cd57701ea7ae326f250b4d30d0a6262f76278304f7d81c100502c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 12:03:14.219139 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac46157f9a99fa194d6ac09a170f6e89a1646f3224c4074b97f59ed6474911fe', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 12:03:14.219151 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1a85d5bf5f554946dd317e36a286e6dd55e8872b9a540d089a97e7c9fa28ae33', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 12:03:14.219185 | orchestrator | ok: [testbed-node-4] => (item={'id': '6df0487e706cef95fd285126102e43b5efebbe8c140f92c2a336249c6d4cf6f3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:14.219197 | orchestrator | ok: [testbed-node-4] => (item={'id': '2b7b950700c358b06c29e0cbeb11e589da7ed5ec13c6ff0e4b74f2578b5e0c0a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:14.219208 | orchestrator | skipping: [testbed-node-4] => (item={'id': '951d999f64c67235d0bf3d3b2fb7428524bfabcdb2a6f0fdd0e57881bd31472d', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-19 12:03:14.219220 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fa8b1775876592713c2156c80489c832beb089c36c9dd7159d777ddb977274eb', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 12:03:14.219232 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9cd9deb6a5a7e6ca4019d219005cb4067dc20e4434f11ae862a9a2d91db2bd82', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 12:03:14.219260 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f98c32829e4e083df293d132e23693e7a43265d31bbf58a047a3617d291172be', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 12:03:14.219272 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54820d784bc7e8bdfb778947cd0818c4f298f5a8672a3672f4351223b136e63b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:14.219284 | orchestrator | skipping: [testbed-node-4] => (item={'id': '649cb68c85f61c8b90c2a076b253efaef38de8ca50e5af8d975435d273c6c79e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:14.219295 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0bf6e059e31c7664b4849cfc7f1f5cfad05863bf2e474c6bea6d05724dc1d96d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.219307 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7061a16cf04eba9f970266a45cc603046dd220451295b1a7910749b3d81b38f0', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 12:03:14.219319 | orchestrator | skipping: [testbed-node-5] => (item={'id': '95588a465f9393f891768defb0dea7ca55be90b85e2d09d1621776daffd69d99', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.219336 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dda5af7f9946ca68da1e63356f6192b5dcc6d23535b928902fbaef9441479c3e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 12:03:14.219348 | orchestrator | skipping: [testbed-node-5] => (item={'id': '94db86bf1425072833da7998343e3a8ac51cd91a3b97da04b4cce352c03c2f9d', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.219359 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2b49480b9e59a0f451a9cfe04bd09184dafef0a04a65efee06254eee8a68fdf4', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2025-09-19 12:03:14.219379 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e313479b2e493c66f8399cbbe7a5d845b063f700c0b7f565b69ad9c0bd4113a7', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.219390 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9a3118d47d2a6ede9b5076be2759540be1412eeff457bb1265bac0a1943c001a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 12:03:14.219401 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f6cd1dd510416108bb4b7c230e54688b6791b00754fc67518e5084b71e7e51c3', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 12:03:14.219413 | orchestrator | skipping: [testbed-node-5] => (item={'id': '238002cb5912a6acb9d3f5b9318d2321d66338c2de5b8b32fb1adc9b07ea5e1b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 12:03:14.219424 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7d9c788ef7cd9dcf364de42fa2b55028b119f38a026137ad5f626bd467aa843', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 12:03:14.219438 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85e565d2b6055484d824e21381001d46388f6a137fb8eb5d25dcd7304bf5eba8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 12:03:14.219458 | orchestrator | ok: [testbed-node-5] => (item={'id': '36c5cbf0cfbdeac0745ae88ef02468b33c29e40ebe82a5950027f19c14ed8b37', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:22.404592 | orchestrator | ok: [testbed-node-5] => (item={'id': '91a0aa65d121ef2098b25425cb505b42c81f1e84b223aba36d7f6719e832f3d5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 12:03:22.404705 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8e249383c36208f4ec7b6411ddfae8f1b7eec94a43ca28a2e824a1c6999cf2ba', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-19 12:03:22.404721 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f98f86122d8d33638ceaf8d8395bfce9f7a463d68ea540b9485268fb300fdf9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 12:03:22.404735 | orchestrator | skipping: [testbed-node-5] => (item={'id': '661cd06cad1f5836cc42a2b202ffc0e8a65816db2520df76e8c166640076e2c5', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 12:03:22.404748 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a52521610777e9352cfc00ed484715cc9467f8d62cc50c30716226f615e1893e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 12:03:22.404776 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b288798a0ec528970a6ed6ed539d6357e49576c8cf420c2951056a815817954c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:22.404789 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2dba9352e6c8f71bcc142ce5e0fef6e506b934f1bfa5515786e4e101a0fbb73', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 12:03:22.404819 | orchestrator |
2025-09-19 12:03:22.404883 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-09-19 12:03:22.404896 | orchestrator | Friday 19 September 2025  12:03:14 +0000 (0:00:00.497)       0:00:04.974 ******
2025-09-19 12:03:22.404907 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:22.404919 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:22.404930 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:22.404941 | orchestrator |
2025-09-19 12:03:22.404952 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-09-19 12:03:22.404963 | orchestrator | Friday 19 September 2025  12:03:14 +0000 (0:00:00.311)       0:00:05.286 ******
2025-09-19 12:03:22.404974 | orchestrator | skipping: [testbed-node-3]
2025-09-19 12:03:22.404986 | orchestrator | skipping: [testbed-node-4]
2025-09-19 12:03:22.404996 | orchestrator | skipping: [testbed-node-5]
2025-09-19 12:03:22.405007 | orchestrator |
2025-09-19 12:03:22.405018 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-09-19 12:03:22.405029 | orchestrator | Friday 19 September 2025  12:03:14 +0000 (0:00:00.295)       0:00:05.581 ******
2025-09-19 12:03:22.405040 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:22.405051 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:22.405062 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:22.405073 | orchestrator |
2025-09-19 12:03:22.405084 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 12:03:22.405095 | orchestrator | Friday 19 September 2025  12:03:15 +0000 (0:00:00.513)       0:00:06.094 ******
2025-09-19 12:03:22.405106 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:22.405117 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:22.405129 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:22.405141 | orchestrator |
2025-09-19 12:03:22.405153 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-09-19 12:03:22.405165 | orchestrator | Friday 19 September 2025  12:03:15 +0000 (0:00:00.274)       0:00:06.369 ******
2025-09-19 12:03:22.405178 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-09-19 12:03:22.405192 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-09-19 12:03:22.405204 | orchestrator | skipping: [testbed-node-3]
2025-09-19 12:03:22.405217 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-09-19 12:03:22.405230 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-09-19 12:03:22.405242 | orchestrator | skipping: [testbed-node-4]
2025-09-19 12:03:22.405255 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-09-19 12:03:22.405268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-09-19 12:03:22.405281 | orchestrator | skipping: [testbed-node-5]
2025-09-19 12:03:22.405293 | orchestrator |
2025-09-19 12:03:22.405306 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-09-19 12:03:22.405318 | orchestrator | Friday 19 September 2025  12:03:16 +0000 (0:00:00.358)       0:00:06.728 ******
2025-09-19 12:03:22.405331 | orchestrator | ok: [testbed-node-3]
2025-09-19 12:03:22.405343 | orchestrator | ok: [testbed-node-4]
2025-09-19 12:03:22.405355 | orchestrator | ok: [testbed-node-5]
2025-09-19 12:03:22.405367 |
orchestrator | 2025-09-19 12:03:22.405399 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-19 12:03:22.405412 | orchestrator | Friday 19 September 2025 12:03:16 +0000 (0:00:00.301) 0:00:07.029 ****** 2025-09-19 12:03:22.405425 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405437 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:22.405449 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:22.405462 | orchestrator | 2025-09-19 12:03:22.405475 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-19 12:03:22.405507 | orchestrator | Friday 19 September 2025 12:03:16 +0000 (0:00:00.464) 0:00:07.494 ****** 2025-09-19 12:03:22.405518 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405528 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:22.405539 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:22.405550 | orchestrator | 2025-09-19 12:03:22.405561 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-19 12:03:22.405573 | orchestrator | Friday 19 September 2025 12:03:17 +0000 (0:00:00.310) 0:00:07.805 ****** 2025-09-19 12:03:22.405584 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.405594 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:22.405620 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:22.405631 | orchestrator | 2025-09-19 12:03:22.405654 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 12:03:22.405665 | orchestrator | Friday 19 September 2025 12:03:17 +0000 (0:00:00.292) 0:00:08.098 ****** 2025-09-19 12:03:22.405676 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405688 | orchestrator | 2025-09-19 12:03:22.405699 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 
12:03:22.405710 | orchestrator | Friday 19 September 2025 12:03:17 +0000 (0:00:00.259) 0:00:08.358 ****** 2025-09-19 12:03:22.405721 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405733 | orchestrator | 2025-09-19 12:03:22.405744 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 12:03:22.405755 | orchestrator | Friday 19 September 2025 12:03:17 +0000 (0:00:00.249) 0:00:08.607 ****** 2025-09-19 12:03:22.405766 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405777 | orchestrator | 2025-09-19 12:03:22.405788 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:22.405799 | orchestrator | Friday 19 September 2025 12:03:18 +0000 (0:00:00.250) 0:00:08.858 ****** 2025-09-19 12:03:22.405810 | orchestrator | 2025-09-19 12:03:22.405821 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:22.405862 | orchestrator | Friday 19 September 2025 12:03:18 +0000 (0:00:00.063) 0:00:08.922 ****** 2025-09-19 12:03:22.405873 | orchestrator | 2025-09-19 12:03:22.405884 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:22.405895 | orchestrator | Friday 19 September 2025 12:03:18 +0000 (0:00:00.065) 0:00:08.987 ****** 2025-09-19 12:03:22.405906 | orchestrator | 2025-09-19 12:03:22.405917 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 12:03:22.405928 | orchestrator | Friday 19 September 2025 12:03:18 +0000 (0:00:00.234) 0:00:09.222 ****** 2025-09-19 12:03:22.405939 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405951 | orchestrator | 2025-09-19 12:03:22.405962 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-19 12:03:22.405973 | orchestrator | Friday 19 September 2025 12:03:18 
+0000 (0:00:00.256) 0:00:09.478 ****** 2025-09-19 12:03:22.405984 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.405995 | orchestrator | 2025-09-19 12:03:22.406006 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 12:03:22.406073 | orchestrator | Friday 19 September 2025 12:03:19 +0000 (0:00:00.248) 0:00:09.727 ****** 2025-09-19 12:03:22.406085 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406096 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:22.406107 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:22.406117 | orchestrator | 2025-09-19 12:03:22.406128 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-19 12:03:22.406139 | orchestrator | Friday 19 September 2025 12:03:19 +0000 (0:00:00.297) 0:00:10.025 ****** 2025-09-19 12:03:22.406150 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406161 | orchestrator | 2025-09-19 12:03:22.406172 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-19 12:03:22.406182 | orchestrator | Friday 19 September 2025 12:03:19 +0000 (0:00:00.236) 0:00:10.261 ****** 2025-09-19 12:03:22.406201 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 12:03:22.406212 | orchestrator | 2025-09-19 12:03:22.406223 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-19 12:03:22.406234 | orchestrator | Friday 19 September 2025 12:03:21 +0000 (0:00:01.600) 0:00:11.862 ****** 2025-09-19 12:03:22.406244 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406255 | orchestrator | 2025-09-19 12:03:22.406266 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-19 12:03:22.406277 | orchestrator | Friday 19 September 2025 12:03:21 +0000 (0:00:00.145) 0:00:12.008 ****** 2025-09-19 12:03:22.406288 | 
orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406298 | orchestrator | 2025-09-19 12:03:22.406309 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-19 12:03:22.406368 | orchestrator | Friday 19 September 2025 12:03:21 +0000 (0:00:00.308) 0:00:12.317 ****** 2025-09-19 12:03:22.406381 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:22.406392 | orchestrator | 2025-09-19 12:03:22.406403 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-19 12:03:22.406413 | orchestrator | Friday 19 September 2025 12:03:21 +0000 (0:00:00.125) 0:00:12.442 ****** 2025-09-19 12:03:22.406424 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406435 | orchestrator | 2025-09-19 12:03:22.406446 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 12:03:22.406457 | orchestrator | Friday 19 September 2025 12:03:21 +0000 (0:00:00.121) 0:00:12.564 ****** 2025-09-19 12:03:22.406467 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:22.406478 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:22.406489 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:22.406499 | orchestrator | 2025-09-19 12:03:22.406510 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-19 12:03:22.406529 | orchestrator | Friday 19 September 2025 12:03:22 +0000 (0:00:00.505) 0:00:13.070 ****** 2025-09-19 12:03:34.878437 | orchestrator | changed: [testbed-node-3] 2025-09-19 12:03:34.878554 | orchestrator | changed: [testbed-node-4] 2025-09-19 12:03:34.878570 | orchestrator | changed: [testbed-node-5] 2025-09-19 12:03:34.878583 | orchestrator | 2025-09-19 12:03:34.878596 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-19 12:03:34.878609 | orchestrator | Friday 19 September 2025 12:03:24 +0000 (0:00:02.535) 0:00:15.605 
****** 2025-09-19 12:03:34.878620 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.878632 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.878643 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.878654 | orchestrator | 2025-09-19 12:03:34.878665 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-19 12:03:34.878677 | orchestrator | Friday 19 September 2025 12:03:25 +0000 (0:00:00.328) 0:00:15.933 ****** 2025-09-19 12:03:34.878688 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.878699 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.878710 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.878721 | orchestrator | 2025-09-19 12:03:34.878732 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-19 12:03:34.878744 | orchestrator | Friday 19 September 2025 12:03:25 +0000 (0:00:00.492) 0:00:16.426 ****** 2025-09-19 12:03:34.878755 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:34.878766 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:34.878777 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:34.878789 | orchestrator | 2025-09-19 12:03:34.878801 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-19 12:03:34.878813 | orchestrator | Friday 19 September 2025 12:03:26 +0000 (0:00:00.482) 0:00:16.909 ****** 2025-09-19 12:03:34.878824 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.878835 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.878846 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.878857 | orchestrator | 2025-09-19 12:03:34.878869 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-19 12:03:34.878941 | orchestrator | Friday 19 September 2025 12:03:26 +0000 (0:00:00.291) 0:00:17.200 ****** 2025-09-19 12:03:34.878955 | orchestrator | 
skipping: [testbed-node-3] 2025-09-19 12:03:34.878967 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:34.878980 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:34.878992 | orchestrator | 2025-09-19 12:03:34.879020 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-19 12:03:34.879033 | orchestrator | Friday 19 September 2025 12:03:26 +0000 (0:00:00.277) 0:00:17.478 ****** 2025-09-19 12:03:34.879046 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:34.879058 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:34.879069 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:34.879080 | orchestrator | 2025-09-19 12:03:34.879091 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 12:03:34.879102 | orchestrator | Friday 19 September 2025 12:03:27 +0000 (0:00:00.292) 0:00:17.771 ****** 2025-09-19 12:03:34.879113 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.879124 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.879134 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.879146 | orchestrator | 2025-09-19 12:03:34.879156 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-19 12:03:34.879168 | orchestrator | Friday 19 September 2025 12:03:27 +0000 (0:00:00.712) 0:00:18.484 ****** 2025-09-19 12:03:34.879178 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.879189 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.879200 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.879211 | orchestrator | 2025-09-19 12:03:34.879222 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-19 12:03:34.879233 | orchestrator | Friday 19 September 2025 12:03:28 +0000 (0:00:00.508) 0:00:18.992 ****** 2025-09-19 12:03:34.879244 | orchestrator | ok: [testbed-node-3] 2025-09-19 
12:03:34.879255 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.879265 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.879276 | orchestrator | 2025-09-19 12:03:34.879287 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-19 12:03:34.879299 | orchestrator | Friday 19 September 2025 12:03:28 +0000 (0:00:00.275) 0:00:19.267 ****** 2025-09-19 12:03:34.879310 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:34.879321 | orchestrator | skipping: [testbed-node-4] 2025-09-19 12:03:34.879331 | orchestrator | skipping: [testbed-node-5] 2025-09-19 12:03:34.879342 | orchestrator | 2025-09-19 12:03:34.879353 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-19 12:03:34.879364 | orchestrator | Friday 19 September 2025 12:03:28 +0000 (0:00:00.291) 0:00:19.559 ****** 2025-09-19 12:03:34.879375 | orchestrator | ok: [testbed-node-3] 2025-09-19 12:03:34.879386 | orchestrator | ok: [testbed-node-4] 2025-09-19 12:03:34.879397 | orchestrator | ok: [testbed-node-5] 2025-09-19 12:03:34.879408 | orchestrator | 2025-09-19 12:03:34.879419 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 12:03:34.879430 | orchestrator | Friday 19 September 2025 12:03:29 +0000 (0:00:00.495) 0:00:20.055 ****** 2025-09-19 12:03:34.879441 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 12:03:34.879452 | orchestrator | 2025-09-19 12:03:34.879463 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 12:03:34.879474 | orchestrator | Friday 19 September 2025 12:03:29 +0000 (0:00:00.254) 0:00:20.309 ****** 2025-09-19 12:03:34.879485 | orchestrator | skipping: [testbed-node-3] 2025-09-19 12:03:34.879496 | orchestrator | 2025-09-19 12:03:34.879507 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2025-09-19 12:03:34.879518 | orchestrator | Friday 19 September 2025 12:03:29 +0000 (0:00:00.243) 0:00:20.552 ****** 2025-09-19 12:03:34.879529 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 12:03:34.879540 | orchestrator | 2025-09-19 12:03:34.879551 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 12:03:34.879571 | orchestrator | Friday 19 September 2025 12:03:31 +0000 (0:00:01.661) 0:00:22.214 ****** 2025-09-19 12:03:34.879589 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 12:03:34.879607 | orchestrator | 2025-09-19 12:03:34.879628 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 12:03:34.879655 | orchestrator | Friday 19 September 2025 12:03:31 +0000 (0:00:00.263) 0:00:22.478 ****** 2025-09-19 12:03:34.879693 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 12:03:34.879712 | orchestrator | 2025-09-19 12:03:34.879728 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:34.879746 | orchestrator | Friday 19 September 2025 12:03:32 +0000 (0:00:00.254) 0:00:22.732 ****** 2025-09-19 12:03:34.879763 | orchestrator | 2025-09-19 12:03:34.879781 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:34.879799 | orchestrator | Friday 19 September 2025 12:03:32 +0000 (0:00:00.068) 0:00:22.801 ****** 2025-09-19 12:03:34.879818 | orchestrator | 2025-09-19 12:03:34.879837 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 12:03:34.879855 | orchestrator | Friday 19 September 2025 12:03:32 +0000 (0:00:00.064) 0:00:22.866 ****** 2025-09-19 12:03:34.879873 | orchestrator | 2025-09-19 12:03:34.879921 | orchestrator | RUNNING HANDLER [Write 
report file] ******************************************** 2025-09-19 12:03:34.879939 | orchestrator | Friday 19 September 2025 12:03:32 +0000 (0:00:00.069) 0:00:22.936 ****** 2025-09-19 12:03:34.879958 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 12:03:34.879977 | orchestrator | 2025-09-19 12:03:34.879995 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 12:03:34.880013 | orchestrator | Friday 19 September 2025 12:03:33 +0000 (0:00:01.474) 0:00:24.410 ****** 2025-09-19 12:03:34.880032 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-19 12:03:34.880051 | orchestrator |  "msg": [ 2025-09-19 12:03:34.880070 | orchestrator |  "Validator run completed.", 2025-09-19 12:03:34.880089 | orchestrator |  "You can find the report file here:", 2025-09-19 12:03:34.880108 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-19T12:03:10+00:00-report.json", 2025-09-19 12:03:34.880128 | orchestrator |  "on the following host:", 2025-09-19 12:03:34.880146 | orchestrator |  "testbed-manager" 2025-09-19 12:03:34.880164 | orchestrator |  ] 2025-09-19 12:03:34.880183 | orchestrator | } 2025-09-19 12:03:34.880202 | orchestrator | 2025-09-19 12:03:34.880219 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 12:03:34.880248 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-19 12:03:34.880269 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 12:03:34.880288 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 12:03:34.880307 | orchestrator | 2025-09-19 12:03:34.880325 | orchestrator | 2025-09-19 12:03:34.880345 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 12:03:34.880364 | orchestrator | Friday 19 September 2025 12:03:34 +0000 (0:00:00.827) 0:00:25.238 ****** 2025-09-19 12:03:34.880382 | orchestrator | =============================================================================== 2025-09-19 12:03:34.880400 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.54s 2025-09-19 12:03:34.880418 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s 2025-09-19 12:03:34.880437 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.60s 2025-09-19 12:03:34.880469 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2025-09-19 12:03:34.880487 | orchestrator | Create report output directory ------------------------------------------ 0.96s 2025-09-19 12:03:34.880505 | orchestrator | Print report file information ------------------------------------------- 0.83s 2025-09-19 12:03:34.880524 | orchestrator | Get timestamp for report file ------------------------------------------- 0.72s 2025-09-19 12:03:34.880543 | orchestrator | Prepare test data ------------------------------------------------------- 0.71s 2025-09-19 12:03:34.880560 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.54s 2025-09-19 12:03:34.880578 | orchestrator | Set test result to passed if count matches ------------------------------ 0.51s 2025-09-19 12:03:34.880596 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.51s 2025-09-19 12:03:34.880614 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2025-09-19 12:03:34.880633 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2025-09-19 12:03:34.880651 | orchestrator | Pass test if no sub test failed 
----------------------------------------- 0.50s 2025-09-19 12:03:34.880669 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-09-19 12:03:34.880681 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.48s 2025-09-19 12:03:34.880692 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-09-19 12:03:34.880702 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s 2025-09-19 12:03:34.880713 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2025-09-19 12:03:34.880724 | orchestrator | Get list of ceph-osd containers that are not running -------------------- 0.36s 2025-09-19 12:03:35.191130 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-19 12:03:35.199712 | orchestrator | + set -e 2025-09-19 12:03:35.199779 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 12:03:35.199802 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 12:03:35.199821 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 12:03:35.199840 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 12:03:35.199859 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 12:03:35.199873 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 12:03:35.199916 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 12:03:35.199929 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 12:03:35.199940 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 12:03:35.199951 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 12:03:35.199962 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 12:03:35.199973 | orchestrator | ++ export ARA=false 2025-09-19 12:03:35.199984 | orchestrator | ++ ARA=false 2025-09-19 12:03:35.199995 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 12:03:35.200006 | orchestrator | ++ 
DEPLOY_MODE=manager 2025-09-19 12:03:35.200016 | orchestrator | ++ export TEMPEST=false 2025-09-19 12:03:35.200027 | orchestrator | ++ TEMPEST=false 2025-09-19 12:03:35.200037 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 12:03:35.200048 | orchestrator | ++ IS_ZUUL=true 2025-09-19 12:03:35.200059 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 12:03:35.200071 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.121 2025-09-19 12:03:35.200081 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 12:03:35.200092 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 12:03:35.200102 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 12:03:35.200113 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 12:03:35.200124 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 12:03:35.200134 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 12:03:35.200145 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 12:03:35.200155 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 12:03:35.200166 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 12:03:35.200177 | orchestrator | + source /etc/os-release 2025-09-19 12:03:35.200188 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-19 12:03:35.200433 | orchestrator | ++ NAME=Ubuntu 2025-09-19 12:03:35.200524 | orchestrator | ++ VERSION_ID=24.04 2025-09-19 12:03:35.200538 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-19 12:03:35.200549 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-19 12:03:35.200559 | orchestrator | ++ ID=ubuntu 2025-09-19 12:03:35.200569 | orchestrator | ++ ID_LIKE=debian 2025-09-19 12:03:35.200604 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-19 12:03:35.200614 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-19 12:03:35.200624 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-19 12:03:35.200634 | orchestrator | ++ 
PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-19 12:03:35.200644 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-19 12:03:35.200655 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-19 12:03:35.200664 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-19 12:03:35.200674 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-19 12:03:35.200685 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 12:03:35.229592 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 12:03:58.478286 | orchestrator | 2025-09-19 12:03:58.478393 | orchestrator | # Status of Elasticsearch 2025-09-19 12:03:58.478410 | orchestrator | 2025-09-19 12:03:58.478420 | orchestrator | + pushd /opt/configuration/contrib 2025-09-19 12:03:58.478432 | orchestrator | + echo 2025-09-19 12:03:58.478458 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-19 12:03:58.478468 | orchestrator | + echo 2025-09-19 12:03:58.478478 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-19 12:03:58.701385 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-19 12:03:58.701478 | orchestrator | 2025-09-19 12:03:58.701492 | orchestrator | # Status of MariaDB 2025-09-19 12:03:58.701505 | orchestrator | 2025-09-19 12:03:58.701517 | orchestrator | + echo 2025-09-19 12:03:58.701529 | orchestrator | + echo '# Status of MariaDB' 2025-09-19 12:03:58.701540 | orchestrator | + echo 2025-09-19 12:03:58.701551 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-19 12:03:58.701563 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-19 12:03:58.778775 | orchestrator | Reading package lists... 2025-09-19 12:03:59.118263 | orchestrator | Building dependency tree... 2025-09-19 12:03:59.118655 | orchestrator | Reading state information... 2025-09-19 12:03:59.499143 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-19 12:03:59.499243 | orchestrator | bc set to manually installed. 2025-09-19 12:03:59.499257 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-09-19 12:04:00.210478 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-09-19 12:04:00.211059 | orchestrator |
2025-09-19 12:04:00.211086 | orchestrator | # Status of Prometheus
2025-09-19 12:04:00.211097 | orchestrator | + echo
2025-09-19 12:04:00.211107 | orchestrator | + echo '# Status of Prometheus'
2025-09-19 12:04:00.211116 | orchestrator |
2025-09-19 12:04:00.211125 | orchestrator | + echo
2025-09-19 12:04:00.211134 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-09-19 12:04:00.263858 | orchestrator | Unauthorized
2025-09-19 12:04:00.266812 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-09-19 12:04:00.330137 | orchestrator | Unauthorized
2025-09-19 12:04:00.333233 | orchestrator |
2025-09-19 12:04:00.333266 | orchestrator | # Status of RabbitMQ
2025-09-19 12:04:00.333279 | orchestrator |
2025-09-19 12:04:00.333291 | orchestrator | + echo
2025-09-19 12:04:00.333302 | orchestrator | + echo '# Status of RabbitMQ'
2025-09-19 12:04:00.333314 | orchestrator | + echo
2025-09-19 12:04:00.333326 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-09-19 12:04:00.819562 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-09-19 12:04:00.828827 | orchestrator |
2025-09-19 12:04:00.828904 | orchestrator | # Status of Redis
2025-09-19 12:04:00.828918 | orchestrator |
2025-09-19 12:04:00.828929 | orchestrator | + echo
2025-09-19 12:04:00.828940 | orchestrator | + echo '# Status of Redis'
2025-09-19 12:04:00.828952 | orchestrator | + echo
2025-09-19 12:04:00.828963 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-09-19 12:04:00.837573 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001854s;;;0.000000;10.000000
2025-09-19 12:04:00.837667 | orchestrator |
2025-09-19 12:04:00.837684 | orchestrator | # Create backup of MariaDB database
2025-09-19 12:04:00.837696 | orchestrator |
2025-09-19 12:04:00.837708 | orchestrator | + popd
2025-09-19 12:04:00.837719 | orchestrator | + echo
2025-09-19 12:04:00.837731 | orchestrator | + echo '# Create backup of MariaDB database'
2025-09-19 12:04:00.837742 | orchestrator | + echo
2025-09-19 12:04:00.837753 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-09-19 12:04:02.777607 | orchestrator | 2025-09-19 12:04:02 | INFO  | Task 7bd30839-d754-478c-95a2-80b2e7b42b94 (mariadb_backup) was prepared for execution.
2025-09-19 12:04:02.777727 | orchestrator | 2025-09-19 12:04:02 | INFO  | It takes a moment until task 7bd30839-d754-478c-95a2-80b2e7b42b94 (mariadb_backup) has been started and output is visible here.
2025-09-19 12:04:30.658076 | orchestrator |
2025-09-19 12:04:30.658200 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 12:04:30.658217 | orchestrator |
2025-09-19 12:04:30.658228 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 12:04:30.658239 | orchestrator | Friday 19 September 2025 12:04:06 +0000 (0:00:00.197) 0:00:00.197 ******
2025-09-19 12:04:30.658249 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:04:30.658260 | orchestrator | ok: [testbed-node-1]
2025-09-19 12:04:30.658270 | orchestrator | ok: [testbed-node-2]
2025-09-19 12:04:30.658279 | orchestrator |
2025-09-19 12:04:30.658289 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 12:04:30.658299 | orchestrator | Friday 19 September 2025 12:04:06 +0000 (0:00:00.330) 0:00:00.527 ******
2025-09-19 12:04:30.658309 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-19 12:04:30.658319 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-19 12:04:30.658329 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-09-19 12:04:30.658338 | orchestrator |
2025-09-19 12:04:30.658348 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-09-19 12:04:30.658357 | orchestrator |
2025-09-19 12:04:30.658367 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-09-19 12:04:30.658376 | orchestrator | Friday 19 September 2025 12:04:07 +0000 (0:00:00.538) 0:00:01.065 ******
2025-09-19 12:04:30.658387 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 12:04:30.658397 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 12:04:30.658407 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 12:04:30.658416 | orchestrator |
2025-09-19 12:04:30.658426 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-19 12:04:30.658436 | orchestrator | Friday 19 September 2025 12:04:07 +0000 (0:00:00.395) 0:00:01.461 ******
2025-09-19 12:04:30.658446 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 12:04:30.658456 | orchestrator |
2025-09-19 12:04:30.658466 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-09-19 12:04:30.658476 | orchestrator | Friday 19 September 2025 12:04:08 +0000 (0:00:00.580) 0:00:02.041 ******
2025-09-19 12:04:30.658485 | orchestrator | ok: [testbed-node-0]
2025-09-19 12:04:30.658495 | orchestrator | ok: [testbed-node-2]
2025-09-19 12:04:30.658505 | orchestrator | ok: [testbed-node-1]
2025-09-19 12:04:30.658514 | orchestrator |
2025-09-19 12:04:30.658524 | orchestrator | TASK [mariadb : Taking full database backup via
Mariabackup] ******************* 2025-09-19 12:04:30.658534 | orchestrator | Friday 19 September 2025 12:04:11 +0000 (0:00:03.215) 0:00:05.256 ****** 2025-09-19 12:04:30.658544 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 12:04:30.658553 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-19 12:04:30.658564 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 12:04:30.658574 | orchestrator | mariadb_bootstrap_restart 2025-09-19 12:04:30.658620 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:04:30.658632 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:04:30.658644 | orchestrator | changed: [testbed-node-0] 2025-09-19 12:04:30.658655 | orchestrator | 2025-09-19 12:04:30.658666 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 12:04:30.658677 | orchestrator | skipping: no hosts matched 2025-09-19 12:04:30.658687 | orchestrator | 2025-09-19 12:04:30.658699 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 12:04:30.658709 | orchestrator | skipping: no hosts matched 2025-09-19 12:04:30.658720 | orchestrator | 2025-09-19 12:04:30.658731 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 12:04:30.658743 | orchestrator | skipping: no hosts matched 2025-09-19 12:04:30.658754 | orchestrator | 2025-09-19 12:04:30.658765 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 12:04:30.658774 | orchestrator | 2025-09-19 12:04:30.658799 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 12:04:30.658810 | orchestrator | Friday 19 September 2025 12:04:29 +0000 (0:00:17.857) 0:00:23.114 ****** 2025-09-19 12:04:30.658819 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 12:04:30.658829 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:04:30.658838 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:04:30.658848 | orchestrator | 2025-09-19 12:04:30.658858 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 12:04:30.658867 | orchestrator | Friday 19 September 2025 12:04:29 +0000 (0:00:00.293) 0:00:23.408 ****** 2025-09-19 12:04:30.658877 | orchestrator | skipping: [testbed-node-0] 2025-09-19 12:04:30.658886 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:04:30.658896 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:04:30.658905 | orchestrator | 2025-09-19 12:04:30.658914 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 12:04:30.658925 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 12:04:30.658936 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 12:04:30.658946 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 12:04:30.658955 | orchestrator | 2025-09-19 12:04:30.658965 | orchestrator | 2025-09-19 12:04:30.658975 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 12:04:30.658984 | orchestrator | Friday 19 September 2025 12:04:30 +0000 (0:00:00.403) 0:00:23.812 ****** 2025-09-19 12:04:30.658994 | orchestrator | =============================================================================== 2025-09-19 12:04:30.659003 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.86s 2025-09-19 12:04:30.659029 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.22s 2025-09-19 12:04:30.659040 | orchestrator | mariadb : 
include_tasks ------------------------------------------------- 0.58s 2025-09-19 12:04:30.659050 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-09-19 12:04:30.659059 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2025-09-19 12:04:30.659069 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2025-09-19 12:04:30.659078 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-09-19 12:04:30.659088 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-09-19 12:04:30.979090 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-19 12:04:30.988324 | orchestrator | + set -e 2025-09-19 12:04:30.988399 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 12:04:30.988413 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 12:04:30.988451 | orchestrator | ++ INTERACTIVE=false 2025-09-19 12:04:30.988463 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 12:04:30.988474 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 12:04:30.988485 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 12:04:30.989354 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 12:04:30.992231 | orchestrator | 2025-09-19 12:04:30.992259 | orchestrator | # OpenStack endpoints 2025-09-19 12:04:30.992271 | orchestrator | 2025-09-19 12:04:30.992282 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 12:04:30.992294 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 12:04:30.992305 | orchestrator | + export OS_CLOUD=admin 2025-09-19 12:04:30.992316 | orchestrator | + OS_CLOUD=admin 2025-09-19 12:04:30.992327 | orchestrator | + echo 2025-09-19 12:04:30.992339 | orchestrator | + echo '# 
OpenStack endpoints' 2025-09-19 12:04:30.992350 | orchestrator | + echo 2025-09-19 12:04:30.992361 | orchestrator | + openstack endpoint list 2025-09-19 12:04:34.500505 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 12:04:34.500585 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-19 12:04:34.500600 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 12:04:34.500610 | orchestrator | | 0257d462c54140be9a93ec4f9b909d4d | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-19 12:04:34.500628 | orchestrator | | 0422c04c7b5647a680a51bb88c244c65 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-19 12:04:34.500638 | orchestrator | | 064d3e9a1cb94180b812a49d101b03e6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-19 12:04:34.500648 | orchestrator | | 0a9f3918998449cf9edefd81ae088f9b | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-19 12:04:34.500658 | orchestrator | | 0fc115b45fd94a4db3a1d7a2aaac853e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-19 12:04:34.500667 | orchestrator | | 142ac813cd384c7f8b35990d2f811e95 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-19 12:04:34.500677 | orchestrator | | 2b02b2a600ca4edb99d60e0563d786ad | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-19 12:04:34.500686 | orchestrator | | 449373ab9e4e43f0b6dc59d7313e402b | 
RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-19 12:04:34.500696 | orchestrator | | 5041aa7a44a444a59ff550f162b95dbd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 12:04:34.500705 | orchestrator | | 5d04b90596de492aa22b47445f955292 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-19 12:04:34.500715 | orchestrator | | 5e398b2ed2934b598bb174fc5fddd7fd | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 12:04:34.500725 | orchestrator | | 9336ac1c74bb4ed29a15d3485b729b30 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-19 12:04:34.500734 | orchestrator | | 9b11d006d55a4bbaa102177b44732115 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-19 12:04:34.500761 | orchestrator | | b9a02e591a9741829305dd80c3af9007 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-19 12:04:34.500771 | orchestrator | | b9bff104e9544d65ba77698e65b63dec | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-19 12:04:34.500780 | orchestrator | | c0a8e5c4d92e4cc38d8c596a3b5931c8 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-19 12:04:34.500790 | orchestrator | | d02e3ceddec3457288929e6b0a2d5798 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 12:04:34.500799 | orchestrator | | df1b3e3f6ce448358e16bb78b94b10df | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-19 12:04:34.500809 | orchestrator | | f2b0f83a20da4bc6af5509fab8ad023b | RegionOne | nova | compute | True | internal | 
https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-19 12:04:34.500818 | orchestrator | | f587fcbcaa534fa79e1173d240f7e956 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 12:04:34.500842 | orchestrator | | f6b31dfa8e764eb7abb19fd24bb0a6bc | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-19 12:04:34.500852 | orchestrator | | fcede2a6b9054837ad4a3eeda06e5629 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-19 12:04:34.500862 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 12:04:34.729649 | orchestrator | 2025-09-19 12:04:34.729734 | orchestrator | # Cinder 2025-09-19 12:04:34.729748 | orchestrator | 2025-09-19 12:04:34.729760 | orchestrator | + echo 2025-09-19 12:04:34.729771 | orchestrator | + echo '# Cinder' 2025-09-19 12:04:34.729802 | orchestrator | + echo 2025-09-19 12:04:34.729814 | orchestrator | + openstack volume service list 2025-09-19 12:04:37.801462 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:37.801592 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 12:04:37.801638 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:37.801661 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T12:04:28.000000 | 2025-09-19 12:04:37.801678 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T12:04:28.000000 | 2025-09-19 12:04:37.801697 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 
2025-09-19T12:04:28.000000 | 2025-09-19 12:04:37.801715 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-19T12:04:28.000000 | 2025-09-19 12:04:37.801735 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-19T12:04:30.000000 | 2025-09-19 12:04:37.801763 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-19T12:04:31.000000 | 2025-09-19 12:04:37.801782 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-19T12:04:36.000000 | 2025-09-19 12:04:37.801799 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-19T12:04:37.000000 | 2025-09-19 12:04:37.801817 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-19T12:04:37.000000 | 2025-09-19 12:04:37.801835 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:38.057482 | orchestrator | 2025-09-19 12:04:38.057587 | orchestrator | # Neutron 2025-09-19 12:04:38.057601 | orchestrator | 2025-09-19 12:04:38.057615 | orchestrator | + echo 2025-09-19 12:04:38.057628 | orchestrator | + echo '# Neutron' 2025-09-19 12:04:38.057644 | orchestrator | + echo 2025-09-19 12:04:38.057657 | orchestrator | + openstack network agent list 2025-09-19 12:04:40.880387 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:04:40.880487 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-19 12:04:40.880503 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:04:40.880515 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 
| nova | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880527 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880538 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880548 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880559 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880570 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-19 12:04:40.880581 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:04:40.880592 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:04:40.880602 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:04:40.880613 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:04:41.148020 | orchestrator | + openstack network service provider list 2025-09-19 12:04:43.657963 | orchestrator | +---------------+------+---------+ 2025-09-19 12:04:43.658112 | orchestrator | | Service Type | Name | Default | 2025-09-19 12:04:43.658129 | orchestrator | +---------------+------+---------+ 2025-09-19 12:04:43.658141 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-19 12:04:43.658153 | orchestrator | +---------------+------+---------+ 2025-09-19 12:04:43.914460 | orchestrator | 2025-09-19 12:04:43.914552 | orchestrator | # Nova 
2025-09-19 12:04:43.914566 | orchestrator | 2025-09-19 12:04:43.914578 | orchestrator | + echo 2025-09-19 12:04:43.914589 | orchestrator | + echo '# Nova' 2025-09-19 12:04:43.914601 | orchestrator | + echo 2025-09-19 12:04:43.914612 | orchestrator | + openstack compute service list 2025-09-19 12:04:46.846958 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:46.847048 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 12:04:46.847058 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:46.847065 | orchestrator | | d3c950f7-fbb7-41a3-a478-db77ab174d3f | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T12:04:43.000000 | 2025-09-19 12:04:46.847088 | orchestrator | | ec5a227c-b1b6-4738-9163-8cef2098030c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T12:04:40.000000 | 2025-09-19 12:04:46.847122 | orchestrator | | c3a75908-7140-4a59-b82e-80a7c2eccdaf | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T12:04:42.000000 | 2025-09-19 12:04:46.847133 | orchestrator | | 16121ba7-9a44-4154-a506-41e750049c95 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-19T12:04:44.000000 | 2025-09-19 12:04:46.847144 | orchestrator | | 942768d1-7c0f-45e0-9b2f-9c30f1e83c3d | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-19T12:04:38.000000 | 2025-09-19 12:04:46.847155 | orchestrator | | 073dca66-4009-4368-b635-5f439af95f4d | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-19T12:04:39.000000 | 2025-09-19 12:04:46.847164 | orchestrator | | 50df55b1-1f76-41e0-bc9d-3c1cf90cc141 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-19T12:04:43.000000 | 2025-09-19 12:04:46.847175 | orchestrator | | 
5633bdc2-a018-4f66-a2b0-59545f2fccf8 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-19T12:04:43.000000 | 2025-09-19 12:04:46.847185 | orchestrator | | 6c1ff73d-1a64-4916-8b6c-24ff665db46c | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-19T12:04:44.000000 | 2025-09-19 12:04:46.847195 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:04:47.192847 | orchestrator | + openstack hypervisor list 2025-09-19 12:04:49.869373 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:04:49.869463 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-19 12:04:49.869476 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:04:49.869487 | orchestrator | | 5380592c-1c30-4e7d-91de-960e73feda48 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-19 12:04:49.869496 | orchestrator | | b8f818ab-8b83-45bf-a9a9-a1aa5c1e5641 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-19 12:04:49.869506 | orchestrator | | e2722b7f-0179-4860-92d4-be9089b3eb35 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-19 12:04:49.869516 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:04:50.119897 | orchestrator | 2025-09-19 12:04:50.119978 | orchestrator | # Run OpenStack test play 2025-09-19 12:04:50.119989 | orchestrator | 2025-09-19 12:04:50.119998 | orchestrator | + echo 2025-09-19 12:04:50.120006 | orchestrator | + echo '# Run OpenStack test play' 2025-09-19 12:04:50.120015 | orchestrator | + echo 2025-09-19 12:04:50.120024 | orchestrator | + osism apply --environment openstack test 2025-09-19 12:04:52.239365 | orchestrator | 2025-09-19 12:04:52 | INFO  | Trying to run 
play test in environment openstack 2025-09-19 12:04:52.308505 | orchestrator | 2025-09-19 12:04:52 | INFO  | Task cdfa47a8-a8b9-4125-81f1-68e1f84e98c5 (test) was prepared for execution. 2025-09-19 12:04:52.308631 | orchestrator | 2025-09-19 12:04:52 | INFO  | It takes a moment until task cdfa47a8-a8b9-4125-81f1-68e1f84e98c5 (test) has been started and output is visible here. 2025-09-19 12:10:54.468381 | orchestrator | 2025-09-19 12:10:54.468494 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 12:10:54.468508 | orchestrator | 2025-09-19 12:10:54.468519 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 12:10:54.468529 | orchestrator | Friday 19 September 2025 12:04:56 +0000 (0:00:00.086) 0:00:00.086 ****** 2025-09-19 12:10:54.468539 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468550 | orchestrator | 2025-09-19 12:10:54.468560 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 12:10:54.468569 | orchestrator | Friday 19 September 2025 12:04:59 +0000 (0:00:03.610) 0:00:03.697 ****** 2025-09-19 12:10:54.468579 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468589 | orchestrator | 2025-09-19 12:10:54.468599 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 12:10:54.468631 | orchestrator | Friday 19 September 2025 12:05:04 +0000 (0:00:04.430) 0:00:08.128 ****** 2025-09-19 12:10:54.468641 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468650 | orchestrator | 2025-09-19 12:10:54.468660 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 12:10:54.468692 | orchestrator | Friday 19 September 2025 12:05:10 +0000 (0:00:06.063) 0:00:14.191 ****** 2025-09-19 12:10:54.468703 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468712 | orchestrator | 
2025-09-19 12:10:54.468722 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 12:10:54.468731 | orchestrator | Friday 19 September 2025 12:05:14 +0000 (0:00:03.871) 0:00:18.063 ****** 2025-09-19 12:10:54.468741 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468750 | orchestrator | 2025-09-19 12:10:54.468760 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 12:10:54.468769 | orchestrator | Friday 19 September 2025 12:05:18 +0000 (0:00:04.028) 0:00:22.091 ****** 2025-09-19 12:10:54.468779 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-19 12:10:54.468789 | orchestrator | changed: [localhost] => (item=member) 2025-09-19 12:10:54.468800 | orchestrator | changed: [localhost] => (item=creator) 2025-09-19 12:10:54.468809 | orchestrator | 2025-09-19 12:10:54.468819 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 12:10:54.468828 | orchestrator | Friday 19 September 2025 12:05:30 +0000 (0:00:11.964) 0:00:34.056 ****** 2025-09-19 12:10:54.468838 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468847 | orchestrator | 2025-09-19 12:10:54.468857 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 12:10:54.468866 | orchestrator | Friday 19 September 2025 12:05:34 +0000 (0:00:04.336) 0:00:38.392 ****** 2025-09-19 12:10:54.468875 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468885 | orchestrator | 2025-09-19 12:10:54.468894 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 12:10:54.468903 | orchestrator | Friday 19 September 2025 12:05:39 +0000 (0:00:04.678) 0:00:43.071 ****** 2025-09-19 12:10:54.468913 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468924 | orchestrator | 2025-09-19 12:10:54.468934 | orchestrator | TASK 
[Create icmp security group] ********************************************** 2025-09-19 12:10:54.468945 | orchestrator | Friday 19 September 2025 12:05:43 +0000 (0:00:04.415) 0:00:47.487 ****** 2025-09-19 12:10:54.468956 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.468966 | orchestrator | 2025-09-19 12:10:54.468976 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 12:10:54.468987 | orchestrator | Friday 19 September 2025 12:05:47 +0000 (0:00:04.128) 0:00:51.615 ****** 2025-09-19 12:10:54.468997 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.469008 | orchestrator | 2025-09-19 12:10:54.469018 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 12:10:54.469028 | orchestrator | Friday 19 September 2025 12:05:51 +0000 (0:00:04.036) 0:00:55.652 ****** 2025-09-19 12:10:54.469038 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.469048 | orchestrator | 2025-09-19 12:10:54.469059 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 12:10:54.469069 | orchestrator | Friday 19 September 2025 12:05:56 +0000 (0:00:04.467) 0:01:00.119 ****** 2025-09-19 12:10:54.469080 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.469091 | orchestrator | 2025-09-19 12:10:54.469101 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 12:10:54.469112 | orchestrator | Friday 19 September 2025 12:06:11 +0000 (0:00:15.357) 0:01:15.477 ****** 2025-09-19 12:10:54.469122 | orchestrator | changed: [localhost] => (item=test) 2025-09-19 12:10:54.469133 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 12:10:54.469143 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 12:10:54.469152 | orchestrator | 2025-09-19 12:10:54.469162 | orchestrator | STILL ALIVE [task 'Create test instances' is running] 
************************** 2025-09-19 12:10:54.469179 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 12:10:54.469188 | orchestrator | 2025-09-19 12:10:54.469198 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 12:10:54.469207 | orchestrator | 2025-09-19 12:10:54.469232 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 12:10:54.469243 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 12:10:54.469252 | orchestrator | 2025-09-19 12:10:54.469262 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-09-19 12:10:54.469271 | orchestrator | Friday 19 September 2025 12:09:33 +0000 (0:03:21.763) 0:04:37.240 ****** 2025-09-19 12:10:54.469280 | orchestrator | changed: [localhost] => (item=test) 2025-09-19 12:10:54.469290 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 12:10:54.469299 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 12:10:54.469309 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 12:10:54.469318 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 12:10:54.469331 | orchestrator | 2025-09-19 12:10:54.469341 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-09-19 12:10:54.469351 | orchestrator | Friday 19 September 2025 12:09:56 +0000 (0:00:23.365) 0:05:00.606 ****** 2025-09-19 12:10:54.469402 | orchestrator | changed: [localhost] => (item=test) 2025-09-19 12:10:54.469413 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 12:10:54.469438 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 12:10:54.469448 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 12:10:54.469458 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 12:10:54.469467 | orchestrator | 2025-09-19 12:10:54.469477 | 
orchestrator | TASK [Create test volume] ****************************************************** 2025-09-19 12:10:54.469486 | orchestrator | Friday 19 September 2025 12:10:28 +0000 (0:00:32.049) 0:05:32.655 ****** 2025-09-19 12:10:54.469496 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.469505 | orchestrator | 2025-09-19 12:10:54.469515 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-09-19 12:10:54.469524 | orchestrator | Friday 19 September 2025 12:10:35 +0000 (0:00:06.798) 0:05:39.454 ****** 2025-09-19 12:10:54.469534 | orchestrator | changed: [localhost] 2025-09-19 12:10:54.469543 | orchestrator | 2025-09-19 12:10:54.469553 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-09-19 12:10:54.469562 | orchestrator | Friday 19 September 2025 12:10:49 +0000 (0:00:13.528) 0:05:52.982 ****** 2025-09-19 12:10:54.469572 | orchestrator | ok: [localhost] 2025-09-19 12:10:54.469583 | orchestrator | 2025-09-19 12:10:54.469593 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-09-19 12:10:54.469602 | orchestrator | Friday 19 September 2025 12:10:54 +0000 (0:00:05.006) 0:05:57.989 ****** 2025-09-19 12:10:54.469612 | orchestrator | ok: [localhost] => { 2025-09-19 12:10:54.469621 | orchestrator |  "msg": "192.168.112.109" 2025-09-19 12:10:54.469631 | orchestrator | } 2025-09-19 12:10:54.469641 | orchestrator | 2025-09-19 12:10:54.469650 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 12:10:54.469660 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 12:10:54.469708 | orchestrator | 2025-09-19 12:10:54.469719 | orchestrator | 2025-09-19 12:10:54.469728 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 12:10:54.469738 | orchestrator | 
Friday 19 September 2025 12:10:54 +0000 (0:00:00.047) 0:05:58.036 ****** 2025-09-19 12:10:54.469747 | orchestrator | =============================================================================== 2025-09-19 12:10:54.469757 | orchestrator | Create test instances ------------------------------------------------- 201.76s 2025-09-19 12:10:54.469766 | orchestrator | Add tag to instances --------------------------------------------------- 32.05s 2025-09-19 12:10:54.469781 | orchestrator | Add metadata to instances ---------------------------------------------- 23.37s 2025-09-19 12:10:54.469799 | orchestrator | Create test network topology ------------------------------------------- 15.36s 2025-09-19 12:10:54.469809 | orchestrator | Attach test volume ----------------------------------------------------- 13.53s 2025-09-19 12:10:54.469819 | orchestrator | Add member roles to user test ------------------------------------------ 11.97s 2025-09-19 12:10:54.469828 | orchestrator | Create test volume ------------------------------------------------------ 6.80s 2025-09-19 12:10:54.469838 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.06s 2025-09-19 12:10:54.469847 | orchestrator | Create floating ip address ---------------------------------------------- 5.01s 2025-09-19 12:10:54.469856 | orchestrator | Create ssh security group ----------------------------------------------- 4.68s 2025-09-19 12:10:54.469866 | orchestrator | Create test keypair ----------------------------------------------------- 4.47s 2025-09-19 12:10:54.469875 | orchestrator | Create test-admin user -------------------------------------------------- 4.43s 2025-09-19 12:10:54.469885 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.42s 2025-09-19 12:10:54.469894 | orchestrator | Create test server group ------------------------------------------------ 4.34s 2025-09-19 12:10:54.469904 | orchestrator | Create icmp security 
group ---------------------------------------------- 4.13s 2025-09-19 12:10:54.469913 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.04s 2025-09-19 12:10:54.469922 | orchestrator | Create test user -------------------------------------------------------- 4.03s 2025-09-19 12:10:54.469932 | orchestrator | Create test project ----------------------------------------------------- 3.87s 2025-09-19 12:10:54.469941 | orchestrator | Create test domain ------------------------------------------------------ 3.61s 2025-09-19 12:10:54.469951 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-09-19 12:10:54.853772 | orchestrator | + server_list 2025-09-19 12:10:54.853869 | orchestrator | + openstack --os-cloud test server list 2025-09-19 12:10:58.446443 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-09-19 12:10:58.446568 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-09-19 12:10:58.446592 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-09-19 12:10:58.446611 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE | auto_allocated_network=10.42.0.18, 192.168.112.114 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-09-19 12:10:58.446629 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.179 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-09-19 12:10:58.446647 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.117 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-09-19 12:10:58.446666 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE | auto_allocated_network=10.42.0.13, 192.168.112.144 | Cirros 
0.6.2 | SCS-1L-1-5 | 2025-09-19 12:10:58.446762 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE | auto_allocated_network=10.42.0.41, 192.168.112.109 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-09-19 12:10:58.446788 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-09-19 12:10:58.729859 | orchestrator | + openstack --os-cloud test server show test 2025-09-19 12:11:02.108062 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:02.108204 | orchestrator | | Field | Value | 2025-09-19 12:11:02.108249 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:02.108269 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 12:11:02.108280 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 12:11:02.108291 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 12:11:02.108303 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-09-19 12:11:02.108314 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 12:11:02.108325 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 12:11:02.108354 | orchestrator | 
| OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 12:11:02.108366 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 12:11:02.108384 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 12:11:02.108399 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 12:11:02.108411 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 12:11:02.108422 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 12:11:02.108433 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 12:11:02.108444 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 12:11:02.108455 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 12:11:02.108466 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T12:06:41.000000 | 2025-09-19 12:11:02.108487 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 12:11:02.108510 | orchestrator | | accessIPv4 | | 2025-09-19 12:11:02.108521 | orchestrator | | accessIPv6 | | 2025-09-19 12:11:02.108536 | orchestrator | | addresses | auto_allocated_network=10.42.0.41, 192.168.112.109 | 2025-09-19 12:11:02.108547 | orchestrator | | config_drive | | 2025-09-19 12:11:02.108560 | orchestrator | | created | 2025-09-19T12:06:20Z | 2025-09-19 12:11:02.108572 | orchestrator | | description | None | 2025-09-19 12:11:02.108585 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:5', extra_specs.scs:name-v2='SCS-1L-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 12:11:02.108597 | orchestrator | | hostId | a2ca7f4ec9111e76e4c76d68d4061f88416426f4b3cac7cf19984d8d | 2025-09-19 12:11:02.108609 | orchestrator | | host_status | None | 2025-09-19 12:11:02.108634 | orchestrator | | id | 
25d1d02a-ced4-490b-b700-0cd2ce49984c | 2025-09-19 12:11:02.108647 | orchestrator | | image | Cirros 0.6.2 (39cc8413-d2ee-4a37-9ebf-aae844bb00b4) | 2025-09-19 12:11:02.108660 | orchestrator | | key_name | test | 2025-09-19 12:11:02.108673 | orchestrator | | locked | False | 2025-09-19 12:11:02.108692 | orchestrator | | locked_reason | None | 2025-09-19 12:11:02.108730 | orchestrator | | name | test | 2025-09-19 12:11:02.108742 | orchestrator | | pinned_availability_zone | None | 2025-09-19 12:11:02.108753 | orchestrator | | progress | 0 | 2025-09-19 12:11:02.108764 | orchestrator | | project_id | e12af0bc7ff741b696bf7d024c06a74d | 2025-09-19 12:11:02.108782 | orchestrator | | properties | hostname='test' | 2025-09-19 12:11:02.108799 | orchestrator | | security_groups | name='icmp' | 2025-09-19 12:11:02.108811 | orchestrator | | | name='ssh' | 2025-09-19 12:11:02.108822 | orchestrator | | server_groups | None | 2025-09-19 12:11:02.108838 | orchestrator | | status | ACTIVE | 2025-09-19 12:11:02.108849 | orchestrator | | tags | test | 2025-09-19 12:11:02.108860 | orchestrator | | trusted_image_certificates | None | 2025-09-19 12:11:02.108871 | orchestrator | | updated | 2025-09-19T12:09:38Z | 2025-09-19 12:11:02.108882 | orchestrator | | user_id | 985140de83214006af1ef74d287217bc | 2025-09-19 12:11:02.108899 | orchestrator | | volumes_attached | delete_on_termination='False', id='4d3d604e-56bd-40e4-b238-1ad2ac833d79' | 2025-09-19 12:11:02.114110 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:02.425553 | orchestrator | + openstack --os-cloud test server show test-1 
2025-09-19 12:11:05.732620 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:05.732754 | orchestrator | | Field | Value | 2025-09-19 12:11:05.732780 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:05.732792 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 12:11:05.732804 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 12:11:05.732816 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 12:11:05.732827 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-09-19 12:11:05.732857 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 12:11:05.732869 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 12:11:05.732898 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 12:11:05.732910 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 12:11:05.732921 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 12:11:05.732936 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 12:11:05.732948 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 12:11:05.732959 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 
12:11:05.732970 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 12:11:05.732988 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 12:11:05.732999 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 12:11:05.733010 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T12:07:24.000000 | 2025-09-19 12:11:05.733029 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 12:11:05.733041 | orchestrator | | accessIPv4 | | 2025-09-19 12:11:05.733052 | orchestrator | | accessIPv6 | | 2025-09-19 12:11:05.733067 | orchestrator | | addresses | auto_allocated_network=10.42.0.13, 192.168.112.144 | 2025-09-19 12:11:05.733078 | orchestrator | | config_drive | | 2025-09-19 12:11:05.733090 | orchestrator | | created | 2025-09-19T12:07:02Z | 2025-09-19 12:11:05.733106 | orchestrator | | description | None | 2025-09-19 12:11:05.733117 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:5', extra_specs.scs:name-v2='SCS-1L-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 12:11:05.733128 | orchestrator | | hostId | df85dcfba1387929861518c36d58a18bc2b332fd35c7e3441c4e912b | 2025-09-19 12:11:05.733139 | orchestrator | | host_status | None | 2025-09-19 12:11:05.733157 | orchestrator | | id | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | 2025-09-19 12:11:05.733170 | orchestrator | | image | Cirros 0.6.2 (39cc8413-d2ee-4a37-9ebf-aae844bb00b4) | 2025-09-19 12:11:05.733183 | orchestrator | | key_name | test | 2025-09-19 12:11:05.733197 | orchestrator | | locked | False | 2025-09-19 12:11:05.733210 | orchestrator | | locked_reason | None | 2025-09-19 12:11:05.733223 | orchestrator | | name | test-1 | 2025-09-19 12:11:05.733243 | orchestrator | | pinned_availability_zone | None | 
2025-09-19 12:11:05.733262 | orchestrator | | progress | 0 | 2025-09-19 12:11:05.733275 | orchestrator | | project_id | e12af0bc7ff741b696bf7d024c06a74d | 2025-09-19 12:11:05.733287 | orchestrator | | properties | hostname='test-1' | 2025-09-19 12:11:05.733304 | orchestrator | | security_groups | name='icmp' | 2025-09-19 12:11:05.733316 | orchestrator | | | name='ssh' | 2025-09-19 12:11:05.733332 | orchestrator | | server_groups | None | 2025-09-19 12:11:05.733343 | orchestrator | | status | ACTIVE | 2025-09-19 12:11:05.733359 | orchestrator | | tags | test | 2025-09-19 12:11:05.733394 | orchestrator | | trusted_image_certificates | None | 2025-09-19 12:11:05.733424 | orchestrator | | updated | 2025-09-19T12:09:42Z | 2025-09-19 12:11:05.733444 | orchestrator | | user_id | 985140de83214006af1ef74d287217bc | 2025-09-19 12:11:05.733463 | orchestrator | | volumes_attached | | 2025-09-19 12:11:05.736382 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:05.996057 | orchestrator | + openstack --os-cloud test server show test-2 2025-09-19 12:11:09.207547 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:09.207649 | orchestrator | | Field | Value | 2025-09-19 12:11:09.207682 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:09.207702 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 12:11:09.207825 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 12:11:09.207851 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 12:11:09.207869 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-09-19 12:11:09.207887 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 12:11:09.207905 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 12:11:09.207946 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 12:11:09.207965 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 12:11:09.207983 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 12:11:09.208011 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 12:11:09.208046 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 12:11:09.208065 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 12:11:09.208085 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 12:11:09.208105 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 12:11:09.208125 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 12:11:09.208144 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T12:08:05.000000 | 2025-09-19 12:11:09.208175 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 12:11:09.208196 | orchestrator | | accessIPv4 | | 2025-09-19 12:11:09.208224 | orchestrator | | accessIPv6 | | 2025-09-19 12:11:09.208255 | 
orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.117 | 2025-09-19 12:11:09.208273 | orchestrator | | config_drive | | 2025-09-19 12:11:09.208292 | orchestrator | | created | 2025-09-19T12:07:43Z | 2025-09-19 12:11:09.208309 | orchestrator | | description | None | 2025-09-19 12:11:09.208327 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:5', extra_specs.scs:name-v2='SCS-1L-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 12:11:09.208345 | orchestrator | | hostId | 4b930cf088e584325532b2b61e19d813a6a108e7e1907be67ce6c748 | 2025-09-19 12:11:09.208363 | orchestrator | | host_status | None | 2025-09-19 12:11:09.208394 | orchestrator | | id | f3ef682c-28a5-44ff-9acf-895322eb6953 | 2025-09-19 12:11:09.208414 | orchestrator | | image | Cirros 0.6.2 (39cc8413-d2ee-4a37-9ebf-aae844bb00b4) | 2025-09-19 12:11:09.208452 | orchestrator | | key_name | test | 2025-09-19 12:11:09.208472 | orchestrator | | locked | False | 2025-09-19 12:11:09.208491 | orchestrator | | locked_reason | None | 2025-09-19 12:11:09.208512 | orchestrator | | name | test-2 | 2025-09-19 12:11:09.208530 | orchestrator | | pinned_availability_zone | None | 2025-09-19 12:11:09.208549 | orchestrator | | progress | 0 | 2025-09-19 12:11:09.208568 | orchestrator | | project_id | e12af0bc7ff741b696bf7d024c06a74d | 2025-09-19 12:11:09.208588 | orchestrator | | properties | hostname='test-2' | 2025-09-19 12:11:09.208617 | orchestrator | | security_groups | name='icmp' | 2025-09-19 12:11:09.208648 | orchestrator | | | name='ssh' | 2025-09-19 12:11:09.208665 | orchestrator | | server_groups | None | 2025-09-19 12:11:09.208683 | orchestrator | | status | ACTIVE | 2025-09-19 12:11:09.208701 | orchestrator | | tags 
| test | 2025-09-19 12:11:09.208747 | orchestrator | | trusted_image_certificates | None | 2025-09-19 12:11:09.208768 | orchestrator | | updated | 2025-09-19T12:09:47Z | 2025-09-19 12:11:09.208799 | orchestrator | | user_id | 985140de83214006af1ef74d287217bc | 2025-09-19 12:11:09.208819 | orchestrator | | volumes_attached | | 2025-09-19 12:11:09.211929 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:09.477996 | orchestrator | + openstack --os-cloud test server show test-3 2025-09-19 12:11:12.666673 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:12.666869 | orchestrator | | Field | Value | 2025-09-19 12:11:12.666909 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:12.666931 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 12:11:12.666950 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2025-09-19 12:11:12.666969 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 12:11:12.666988 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-09-19 12:11:12.667006 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 12:11:12.667025 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 12:11:12.667095 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 12:11:12.667117 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 12:11:12.667145 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 12:11:12.667166 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 12:11:12.667184 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 12:11:12.667205 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 12:11:12.667224 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 12:11:12.667244 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 12:11:12.667258 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 12:11:12.667283 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T12:08:38.000000 | 2025-09-19 12:11:12.667307 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 12:11:12.667320 | orchestrator | | accessIPv4 | | 2025-09-19 12:11:12.667338 | orchestrator | | accessIPv6 | | 2025-09-19 12:11:12.667351 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.179 | 2025-09-19 12:11:12.667364 | orchestrator | | config_drive | | 2025-09-19 12:11:12.667376 | orchestrator | | created | 2025-09-19T12:08:22Z | 2025-09-19 12:11:12.667389 | orchestrator | | description | None | 2025-09-19 12:11:12.667402 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:5', 
extra_specs.scs:name-v2='SCS-1L-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 12:11:12.667422 | orchestrator | | hostId | a2ca7f4ec9111e76e4c76d68d4061f88416426f4b3cac7cf19984d8d | 2025-09-19 12:11:12.667434 | orchestrator | | host_status | None | 2025-09-19 12:11:12.667453 | orchestrator | | id | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | 2025-09-19 12:11:12.667465 | orchestrator | | image | Cirros 0.6.2 (39cc8413-d2ee-4a37-9ebf-aae844bb00b4) | 2025-09-19 12:11:12.667483 | orchestrator | | key_name | test | 2025-09-19 12:11:12.667496 | orchestrator | | locked | False | 2025-09-19 12:11:12.667509 | orchestrator | | locked_reason | None | 2025-09-19 12:11:12.667522 | orchestrator | | name | test-3 | 2025-09-19 12:11:12.667535 | orchestrator | | pinned_availability_zone | None | 2025-09-19 12:11:12.667548 | orchestrator | | progress | 0 | 2025-09-19 12:11:12.667567 | orchestrator | | project_id | e12af0bc7ff741b696bf7d024c06a74d | 2025-09-19 12:11:12.667580 | orchestrator | | properties | hostname='test-3' | 2025-09-19 12:11:12.667598 | orchestrator | | security_groups | name='icmp' | 2025-09-19 12:11:12.667610 | orchestrator | | | name='ssh' | 2025-09-19 12:11:12.667626 | orchestrator | | server_groups | None | 2025-09-19 12:11:12.667637 | orchestrator | | status | ACTIVE | 2025-09-19 12:11:12.667648 | orchestrator | | tags | test | 2025-09-19 12:11:12.667660 | orchestrator | | trusted_image_certificates | None | 2025-09-19 12:11:12.667670 | orchestrator | | updated | 2025-09-19T12:09:52Z | 2025-09-19 12:11:12.667688 | orchestrator | | user_id | 985140de83214006af1ef74d287217bc | 2025-09-19 12:11:12.667700 | orchestrator | | volumes_attached | | 2025-09-19 12:11:12.670875 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:12.936808 | orchestrator | + openstack --os-cloud test server show test-4 2025-09-19 12:11:16.186947 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:16.187064 | orchestrator | | Field | Value | 2025-09-19 12:11:16.187080 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:16.187092 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 12:11:16.187103 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 12:11:16.187115 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 12:11:16.187147 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-09-19 12:11:16.187159 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 12:11:16.187170 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 
2025-09-19 12:11:16.187199 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 12:11:16.187220 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 12:11:16.187239 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 12:11:16.187259 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 12:11:16.187278 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 12:11:16.187696 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 12:11:16.187727 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 12:11:16.187779 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 12:11:16.187800 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 12:11:16.187827 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T12:09:17.000000 | 2025-09-19 12:11:16.187852 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 12:11:16.187864 | orchestrator | | accessIPv4 | | 2025-09-19 12:11:16.187875 | orchestrator | | accessIPv6 | | 2025-09-19 12:11:16.187887 | orchestrator | | addresses | auto_allocated_network=10.42.0.18, 192.168.112.114 | 2025-09-19 12:11:16.187898 | orchestrator | | config_drive | | 2025-09-19 12:11:16.187917 | orchestrator | | created | 2025-09-19T12:09:00Z | 2025-09-19 12:11:16.187928 | orchestrator | | description | None | 2025-09-19 12:11:16.187940 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:5', extra_specs.scs:name-v2='SCS-1L-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 12:11:16.187951 | orchestrator | | hostId | df85dcfba1387929861518c36d58a18bc2b332fd35c7e3441c4e912b | 2025-09-19 12:11:16.187967 | orchestrator | | host_status | None | 2025-09-19 
12:11:16.187985 | orchestrator | | id | d2008e6c-45ed-4452-a036-1b38898fc77d | 2025-09-19 12:11:16.187997 | orchestrator | | image | Cirros 0.6.2 (39cc8413-d2ee-4a37-9ebf-aae844bb00b4) | 2025-09-19 12:11:16.188008 | orchestrator | | key_name | test | 2025-09-19 12:11:16.188019 | orchestrator | | locked | False | 2025-09-19 12:11:16.188038 | orchestrator | | locked_reason | None | 2025-09-19 12:11:16.188049 | orchestrator | | name | test-4 | 2025-09-19 12:11:16.188060 | orchestrator | | pinned_availability_zone | None | 2025-09-19 12:11:16.188072 | orchestrator | | progress | 0 | 2025-09-19 12:11:16.188083 | orchestrator | | project_id | e12af0bc7ff741b696bf7d024c06a74d | 2025-09-19 12:11:16.188099 | orchestrator | | properties | hostname='test-4' | 2025-09-19 12:11:16.188117 | orchestrator | | security_groups | name='icmp' | 2025-09-19 12:11:16.188129 | orchestrator | | | name='ssh' | 2025-09-19 12:11:16.188141 | orchestrator | | server_groups | None | 2025-09-19 12:11:16.188158 | orchestrator | | status | ACTIVE | 2025-09-19 12:11:16.188170 | orchestrator | | tags | test | 2025-09-19 12:11:16.188181 | orchestrator | | trusted_image_certificates | None | 2025-09-19 12:11:16.188192 | orchestrator | | updated | 2025-09-19T12:09:56Z | 2025-09-19 12:11:16.188206 | orchestrator | | user_id | 985140de83214006af1ef74d287217bc | 2025-09-19 12:11:16.188231 | orchestrator | | volumes_attached | | 2025-09-19 12:11:16.192181 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 12:11:16.529437 | orchestrator | + server_ping 2025-09-19 12:11:16.531220 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-19 12:11:16.531265 | orchestrator | ++ tr -d '\r' 2025-09-19 12:11:19.528539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:11:19.528643 | orchestrator | + ping -c3 192.168.112.179 2025-09-19 12:11:19.548792 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2025-09-19 12:11:19.548861 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=8.51 ms 2025-09-19 12:11:20.544640 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.31 ms 2025-09-19 12:11:21.547106 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.96 ms 2025-09-19 12:11:21.547204 | orchestrator | 2025-09-19 12:11:21.547262 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-09-19 12:11:21.547277 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-19 12:11:21.547288 | orchestrator | rtt min/avg/max/mdev = 1.961/4.261/8.513/3.009 ms 2025-09-19 12:11:21.547299 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:11:21.547446 | orchestrator | + ping -c3 192.168.112.117 2025-09-19 12:11:21.561695 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-09-19 12:11:21.561736 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=9.38 ms 2025-09-19 12:11:22.556064 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.56 ms 2025-09-19 12:11:23.558303 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.22 ms 2025-09-19 12:11:23.558401 | orchestrator | 2025-09-19 12:11:23.558417 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-09-19 12:11:23.558429 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:11:23.558440 | orchestrator | rtt min/avg/max/mdev = 2.217/4.720/9.384/3.300 ms 2025-09-19 12:11:23.558452 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:11:23.558463 | orchestrator | + ping -c3 192.168.112.114 2025-09-19 12:11:23.571227 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 2025-09-19 12:11:23.571308 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=9.82 ms 2025-09-19 12:11:24.566362 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.82 ms 2025-09-19 12:11:25.566972 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.97 ms 2025-09-19 12:11:25.567058 | orchestrator | 2025-09-19 12:11:25.567071 | orchestrator | --- 192.168.112.114 ping statistics --- 2025-09-19 12:11:25.567082 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:11:25.567091 | orchestrator | rtt min/avg/max/mdev = 1.969/4.867/9.819/3.518 ms 2025-09-19 12:11:25.567619 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:11:25.567644 | orchestrator | + ping -c3 192.168.112.144 2025-09-19 12:11:25.583032 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 
2025-09-19 12:11:25.583103 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=10.5 ms 2025-09-19 12:11:26.577159 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.80 ms 2025-09-19 12:11:27.579037 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=2.29 ms 2025-09-19 12:11:27.579135 | orchestrator | 2025-09-19 12:11:27.579150 | orchestrator | --- 192.168.112.144 ping statistics --- 2025-09-19 12:11:27.579162 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-19 12:11:27.579173 | orchestrator | rtt min/avg/max/mdev = 2.287/5.180/10.459/3.738 ms 2025-09-19 12:11:27.579747 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:11:27.579801 | orchestrator | + ping -c3 192.168.112.109 2025-09-19 12:11:27.594116 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-09-19 12:11:27.594207 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=9.21 ms 2025-09-19 12:11:28.588230 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.11 ms 2025-09-19 12:11:29.590317 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.87 ms 2025-09-19 12:11:29.590606 | orchestrator | 2025-09-19 12:11:29.590645 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-09-19 12:11:29.590668 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:11:29.590689 | orchestrator | rtt min/avg/max/mdev = 1.867/4.396/9.213/3.407 ms 2025-09-19 12:11:29.590727 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 12:11:29.590750 | orchestrator | + compute_list 2025-09-19 12:11:29.590771 | orchestrator | + osism manage compute list testbed-node-3 2025-09-19 12:11:33.099103 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:11:33.099207 | 
orchestrator | | ID | Name | Status |
2025-09-19 12:11:33.099222 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 12:11:33.099234 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE |
2025-09-19 12:11:33.099272 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:11:33.511267 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 12:11:37.010449 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:11:37.010556 | orchestrator | | ID | Name | Status |
2025-09-19 12:11:37.010571 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 12:11:37.010583 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE |
2025-09-19 12:11:37.010610 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE |
2025-09-19 12:11:37.010622 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:11:37.337294 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 12:11:40.662273 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:11:40.662386 | orchestrator | | ID | Name | Status |
2025-09-19 12:11:40.662401 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 12:11:40.662413 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE |
2025-09-19 12:11:40.662425 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE |
2025-09-19 12:11:40.662436 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:11:40.976515 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-09-19 12:11:44.090548 | orchestrator | 2025-09-19 12:11:44 | INFO  | Live migrating server b40efd5b-242c-4482-a55b-036ebd5fd3d5
2025-09-19 12:11:57.743040 | orchestrator | 
2025-09-19 12:11:57 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:00.353444 | orchestrator | 2025-09-19 12:12:00 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:02.910243 | orchestrator | 2025-09-19 12:12:02 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:05.193967 | orchestrator | 2025-09-19 12:12:05 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:07.595659 | orchestrator | 2025-09-19 12:12:07 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:09.848273 | orchestrator | 2025-09-19 12:12:09 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:12.187557 | orchestrator | 2025-09-19 12:12:12 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:12:14.509067 | orchestrator | 2025-09-19 12:12:14 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) completed with status ACTIVE 2025-09-19 12:12:14.509165 | orchestrator | 2025-09-19 12:12:14 | INFO  | Live migrating server 25d1d02a-ced4-490b-b700-0cd2ce49984c 2025-09-19 12:12:25.381764 | orchestrator | 2025-09-19 12:12:25 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:28.013333 | orchestrator | 2025-09-19 12:12:28 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:30.422495 | orchestrator | 2025-09-19 12:12:30 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:32.784410 | orchestrator | 2025-09-19 12:12:32 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 
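For reference, the migration step driving the polling output above is a single `osism` call per source node; a minimal wrapper, reconstructed from the traced command (the `drain_node` name is ours, not part of the testbed scripts):

```shell
# Live-migrate every instance off one compute node onto a target node.
# osism polls Nova and logs "still in progress" lines until each
# migration finishes, as seen in this trace.
drain_node() {
    target=$1
    source=$2
    osism manage compute migrate --yes --target "$target" "$source"
}
```

In this run it is invoked as `osism manage compute migrate --yes --target testbed-node-3 testbed-node-4`, emptying node 4 onto node 3.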
2025-09-19 12:12:35.092170 | orchestrator | 2025-09-19 12:12:35 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:37.370588 | orchestrator | 2025-09-19 12:12:37 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:39.704140 | orchestrator | 2025-09-19 12:12:39 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:41.996152 | orchestrator | 2025-09-19 12:12:41 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:44.274928 | orchestrator | 2025-09-19 12:12:44 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress 2025-09-19 12:12:46.647447 | orchestrator | 2025-09-19 12:12:46 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) completed with status ACTIVE 2025-09-19 12:12:46.963771 | orchestrator | + compute_list 2025-09-19 12:12:46.963876 | orchestrator | + osism manage compute list testbed-node-3 2025-09-19 12:12:50.160685 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:12:50.160787 | orchestrator | | ID | Name | Status | 2025-09-19 12:12:50.160801 | orchestrator | |--------------------------------------+--------+----------| 2025-09-19 12:12:50.160812 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE | 2025-09-19 12:12:50.160823 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE | 2025-09-19 12:12:50.160843 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE | 2025-09-19 12:12:50.160867 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:12:50.496603 | orchestrator | + osism manage compute list testbed-node-4 2025-09-19 12:12:53.245953 | orchestrator | +------+--------+----------+ 2025-09-19 12:12:53.246150 | orchestrator | | ID | 
Name | Status | 2025-09-19 12:12:53.246169 | orchestrator | |------+--------+----------| 2025-09-19 12:12:53.246181 | orchestrator | +------+--------+----------+ 2025-09-19 12:12:53.541471 | orchestrator | + osism manage compute list testbed-node-5 2025-09-19 12:12:56.999652 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:12:56.999756 | orchestrator | | ID | Name | Status | 2025-09-19 12:12:56.999771 | orchestrator | |--------------------------------------+--------+----------| 2025-09-19 12:12:56.999782 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE | 2025-09-19 12:12:56.999794 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE | 2025-09-19 12:12:56.999805 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:12:57.314287 | orchestrator | + server_ping 2025-09-19 12:12:57.316275 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-19 12:12:57.316369 | orchestrator | ++ tr -d '\r' 2025-09-19 12:13:00.055606 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:13:00.055715 | orchestrator | + ping -c3 192.168.112.179 2025-09-19 12:13:00.066179 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2025-09-19 12:13:00.066208 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.61 ms 2025-09-19 12:13:01.064128 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.44 ms 2025-09-19 12:13:02.064084 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.66 ms 2025-09-19 12:13:02.064185 | orchestrator | 2025-09-19 12:13:02.064201 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-09-19 12:13:02.064214 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-19 12:13:02.064225 | orchestrator | rtt min/avg/max/mdev = 1.660/3.570/6.611/2.173 ms 2025-09-19 12:13:02.064402 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:13:02.064434 | orchestrator | + ping -c3 192.168.112.117 2025-09-19 12:13:02.072996 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 2025-09-19 12:13:02.073030 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.43 ms 2025-09-19 12:13:03.071354 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.61 ms 2025-09-19 12:13:04.071798 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.13 ms 2025-09-19 12:13:04.071953 | orchestrator | 2025-09-19 12:13:04.071991 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-09-19 12:13:04.072000 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:13:04.072007 | orchestrator | rtt min/avg/max/mdev = 2.133/3.723/6.425/1.920 ms 2025-09-19 12:13:04.074423 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:13:04.074468 | orchestrator | + ping -c3 192.168.112.114 2025-09-19 12:13:04.086583 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 
2025-09-19 12:13:04.086632 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=8.05 ms 2025-09-19 12:13:05.082326 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.10 ms 2025-09-19 12:13:06.084401 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=2.29 ms 2025-09-19 12:13:06.084508 | orchestrator | 2025-09-19 12:13:06.084524 | orchestrator | --- 192.168.112.114 ping statistics --- 2025-09-19 12:13:06.084537 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:13:06.084549 | orchestrator | rtt min/avg/max/mdev = 2.099/4.146/8.045/2.758 ms 2025-09-19 12:13:06.084994 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:13:06.085026 | orchestrator | + ping -c3 192.168.112.144 2025-09-19 12:13:06.097172 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 2025-09-19 12:13:06.097214 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=7.94 ms 2025-09-19 12:13:07.094133 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=3.15 ms 2025-09-19 12:13:08.094196 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=1.85 ms 2025-09-19 12:13:08.094458 | orchestrator | 2025-09-19 12:13:08.094480 | orchestrator | --- 192.168.112.144 ping statistics --- 2025-09-19 12:13:08.094494 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:13:08.094505 | orchestrator | rtt min/avg/max/mdev = 1.854/4.315/7.942/2.618 ms 2025-09-19 12:13:08.094529 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:13:08.094542 | orchestrator | + ping -c3 192.168.112.109 2025-09-19 12:13:08.105028 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2025-09-19 12:13:08.105097 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.95 ms 2025-09-19 12:13:09.101816 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.52 ms 2025-09-19 12:13:10.103024 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.08 ms 2025-09-19 12:13:10.103182 | orchestrator | 2025-09-19 12:13:10.103212 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-09-19 12:13:10.103233 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:13:10.103252 | orchestrator | rtt min/avg/max/mdev = 2.077/4.184/7.952/2.670 ms 2025-09-19 12:13:10.103635 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-09-19 12:13:13.378620 | orchestrator | 2025-09-19 12:13:13 | INFO  | Live migrating server d2008e6c-45ed-4452-a036-1b38898fc77d 2025-09-19 12:13:25.772350 | orchestrator | 2025-09-19 12:13:25 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:28.123059 | orchestrator | 2025-09-19 12:13:28 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:30.501693 | orchestrator | 2025-09-19 12:13:30 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:32.862585 | orchestrator | 2025-09-19 12:13:32 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:35.134732 | orchestrator | 2025-09-19 12:13:35 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:37.429883 | orchestrator | 2025-09-19 12:13:37 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:13:39.700493 | orchestrator | 2025-09-19 12:13:39 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is 
still in progress 2025-09-19 12:13:41.973097 | orchestrator | 2025-09-19 12:13:41 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) completed with status ACTIVE 2025-09-19 12:13:41.973245 | orchestrator | 2025-09-19 12:13:41 | INFO  | Live migrating server b01631eb-3495-4ca0-9e67-c6b346ed3f9c 2025-09-19 12:13:52.878300 | orchestrator | 2025-09-19 12:13:52 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:13:55.210872 | orchestrator | 2025-09-19 12:13:55 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:13:57.615559 | orchestrator | 2025-09-19 12:13:57 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:13:59.881478 | orchestrator | 2025-09-19 12:13:59 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:14:02.207634 | orchestrator | 2025-09-19 12:14:02 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:14:04.542849 | orchestrator | 2025-09-19 12:14:04 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:14:06.817804 | orchestrator | 2025-09-19 12:14:06 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress 2025-09-19 12:14:09.181746 | orchestrator | 2025-09-19 12:14:09 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) completed with status ACTIVE 2025-09-19 12:14:09.496390 | orchestrator | + compute_list 2025-09-19 12:14:09.496481 | orchestrator | + osism manage compute list testbed-node-3 2025-09-19 12:14:12.731827 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:14:12.731923 | orchestrator | | ID | Name | Status | 2025-09-19 12:14:12.731937 | orchestrator | 
|--------------------------------------+--------+----------| 2025-09-19 12:14:12.731949 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE | 2025-09-19 12:14:12.731960 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE | 2025-09-19 12:14:12.731971 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE | 2025-09-19 12:14:12.731982 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE | 2025-09-19 12:14:12.731993 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE | 2025-09-19 12:14:12.732004 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-19 12:14:13.045671 | orchestrator | + osism manage compute list testbed-node-4 2025-09-19 12:14:15.820176 | orchestrator | +------+--------+----------+ 2025-09-19 12:14:15.820429 | orchestrator | | ID | Name | Status | 2025-09-19 12:14:15.820448 | orchestrator | |------+--------+----------| 2025-09-19 12:14:15.820460 | orchestrator | +------+--------+----------+ 2025-09-19 12:14:16.113603 | orchestrator | + osism manage compute list testbed-node-5 2025-09-19 12:14:18.891927 | orchestrator | +------+--------+----------+ 2025-09-19 12:14:18.892011 | orchestrator | | ID | Name | Status | 2025-09-19 12:14:18.892018 | orchestrator | |------+--------+----------| 2025-09-19 12:14:18.892022 | orchestrator | +------+--------+----------+ 2025-09-19 12:14:19.226092 | orchestrator | + server_ping 2025-09-19 12:14:19.227477 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-19 12:14:19.227521 | orchestrator | ++ tr -d '\r' 2025-09-19 12:14:22.167089 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:14:22.167188 | orchestrator | + ping -c3 192.168.112.179 2025-09-19 12:14:22.181763 | orchestrator | PING 192.168.112.179 
(192.168.112.179) 56(84) bytes of data. 2025-09-19 12:14:22.181823 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=10.5 ms 2025-09-19 12:14:23.175623 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.27 ms 2025-09-19 12:14:24.179130 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.35 ms 2025-09-19 12:14:24.179264 | orchestrator | 2025-09-19 12:14:24.179320 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-09-19 12:14:24.179334 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-19 12:14:24.179346 | orchestrator | rtt min/avg/max/mdev = 2.274/5.050/10.525/3.871 ms 2025-09-19 12:14:24.179857 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:14:24.180086 | orchestrator | + ping -c3 192.168.112.117 2025-09-19 12:14:24.189321 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data. 
2025-09-19 12:14:24.189382 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.28 ms 2025-09-19 12:14:25.186626 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=1.94 ms 2025-09-19 12:14:26.188416 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.09 ms 2025-09-19 12:14:26.188521 | orchestrator | 2025-09-19 12:14:26.188536 | orchestrator | --- 192.168.112.117 ping statistics --- 2025-09-19 12:14:26.188549 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-19 12:14:26.188560 | orchestrator | rtt min/avg/max/mdev = 1.935/3.434/6.281/2.013 ms 2025-09-19 12:14:26.188784 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:14:26.188818 | orchestrator | + ping -c3 192.168.112.114 2025-09-19 12:14:26.200974 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 2025-09-19 12:14:26.201030 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=7.22 ms 2025-09-19 12:14:27.197667 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.42 ms 2025-09-19 12:14:28.198557 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.98 ms 2025-09-19 12:14:28.198619 | orchestrator | 2025-09-19 12:14:28.198631 | orchestrator | --- 192.168.112.114 ping statistics --- 2025-09-19 12:14:28.198640 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-19 12:14:28.198648 | orchestrator | rtt min/avg/max/mdev = 1.984/3.876/7.220/2.371 ms 2025-09-19 12:14:28.198926 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:14:28.198944 | orchestrator | + ping -c3 192.168.112.144 2025-09-19 12:14:28.213610 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data. 
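The `server_ping` helper traced throughout this job can be reconstructed directly from the xtrace lines: it pings every ACTIVE floating IP three times between migration rounds (`test` is the os-cloud name used by this job):

```shell
# Reachability check between migration rounds: ping each ACTIVE
# floating IP three times. tr strips stray carriage returns from the
# CLI output before the addresses are iterated.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```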
2025-09-19 12:14:28.213667 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=10.3 ms 2025-09-19 12:14:29.208591 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.73 ms 2025-09-19 12:14:30.208766 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=1.67 ms 2025-09-19 12:14:30.208876 | orchestrator | 2025-09-19 12:14:30.208903 | orchestrator | --- 192.168.112.144 ping statistics --- 2025-09-19 12:14:30.208921 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 12:14:30.209000 | orchestrator | rtt min/avg/max/mdev = 1.670/4.891/10.273/3.830 ms 2025-09-19 12:14:30.210054 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 12:14:30.210080 | orchestrator | + ping -c3 192.168.112.109 2025-09-19 12:14:30.223619 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-09-19 12:14:30.223696 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.73 ms 2025-09-19 12:14:31.218992 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.31 ms 2025-09-19 12:14:32.221014 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.43 ms 2025-09-19 12:14:32.221196 | orchestrator | 2025-09-19 12:14:32.221214 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-09-19 12:14:32.221227 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-19 12:14:32.221238 | orchestrator | rtt min/avg/max/mdev = 2.310/4.156/7.730/2.527 ms 2025-09-19 12:14:32.221770 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-09-19 12:14:35.391804 | orchestrator | 2025-09-19 12:14:35 | INFO  | Live migrating server d2008e6c-45ed-4452-a036-1b38898fc77d 2025-09-19 12:14:47.110937 | orchestrator | 2025-09-19 12:14:47 | INFO  | Live migration of 
d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:14:49.522744 | orchestrator | 2025-09-19 12:14:49 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:14:51.854200 | orchestrator | 2025-09-19 12:14:51 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:14:54.250807 | orchestrator | 2025-09-19 12:14:54 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:14:56.520224 | orchestrator | 2025-09-19 12:14:56 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:14:58.801472 | orchestrator | 2025-09-19 12:14:58 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:15:01.102134 | orchestrator | 2025-09-19 12:15:01 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress 2025-09-19 12:15:03.489157 | orchestrator | 2025-09-19 12:15:03 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) completed with status ACTIVE 2025-09-19 12:15:03.489258 | orchestrator | 2025-09-19 12:15:03 | INFO  | Live migrating server b40efd5b-242c-4482-a55b-036ebd5fd3d5 2025-09-19 12:15:14.501618 | orchestrator | 2025-09-19 12:15:14 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:15:16.858954 | orchestrator | 2025-09-19 12:15:16 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:15:19.233517 | orchestrator | 2025-09-19 12:15:19 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:15:21.507905 | orchestrator | 2025-09-19 12:15:21 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress 2025-09-19 12:15:23.801584 | orchestrator 
| 2025-09-19 12:15:23 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:15:26.215593 | orchestrator | 2025-09-19 12:15:26 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:15:28.569009 | orchestrator | 2025-09-19 12:15:28 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:15:30.916989 | orchestrator | 2025-09-19 12:15:30 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) completed with status ACTIVE
2025-09-19 12:15:30.917095 | orchestrator | 2025-09-19 12:15:30 | INFO  | Live migrating server f3ef682c-28a5-44ff-9acf-895322eb6953
2025-09-19 12:15:42.610000 | orchestrator | 2025-09-19 12:15:42 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:44.969222 | orchestrator | 2025-09-19 12:15:44 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:47.298776 | orchestrator | 2025-09-19 12:15:47 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:49.580055 | orchestrator | 2025-09-19 12:15:49 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:51.874849 | orchestrator | 2025-09-19 12:15:51 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:54.233348 | orchestrator | 2025-09-19 12:15:54 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:15:56.502951 | orchestrator | 2025-09-19 12:15:56 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) completed with status ACTIVE
2025-09-19 12:15:56.503077 | orchestrator | 2025-09-19 12:15:56 | INFO  | Live migrating server b01631eb-3495-4ca0-9e67-c6b346ed3f9c
2025-09-19 12:16:06.408320 | orchestrator | 2025-09-19 12:16:06 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:08.746873 | orchestrator | 2025-09-19 12:16:08 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:11.131073 | orchestrator | 2025-09-19 12:16:11 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:13.421166 | orchestrator | 2025-09-19 12:16:13 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:15.783685 | orchestrator | 2025-09-19 12:16:15 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:18.147470 | orchestrator | 2025-09-19 12:16:18 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:20.441546 | orchestrator | 2025-09-19 12:16:20 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:16:22.828562 | orchestrator | 2025-09-19 12:16:22 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) completed with status ACTIVE
2025-09-19 12:16:22.828684 | orchestrator | 2025-09-19 12:16:22 | INFO  | Live migrating server 25d1d02a-ced4-490b-b700-0cd2ce49984c
2025-09-19 12:16:32.532430 | orchestrator | 2025-09-19 12:16:32 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:34.926207 | orchestrator | 2025-09-19 12:16:34 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:37.286219 | orchestrator | 2025-09-19 12:16:37 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:39.688090 | orchestrator | 2025-09-19 12:16:39 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:42.247900 | orchestrator | 2025-09-19 12:16:42 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:44.547095 | orchestrator | 2025-09-19 12:16:44 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:46.895706 | orchestrator | 2025-09-19 12:16:46 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:49.182941 | orchestrator | 2025-09-19 12:16:49 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:16:51.682577 | orchestrator | 2025-09-19 12:16:51 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) completed with status ACTIVE
2025-09-19 12:16:52.016480 | orchestrator | + compute_list
2025-09-19 12:16:52.016576 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 12:16:54.820152 | orchestrator | +------+--------+----------+
2025-09-19 12:16:54.820262 | orchestrator | | ID | Name | Status |
2025-09-19 12:16:54.820277 | orchestrator | |------+--------+----------|
2025-09-19 12:16:54.820290 | orchestrator | +------+--------+----------+
2025-09-19 12:16:55.202285 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 12:16:58.428541 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:16:58.428733 | orchestrator | | ID | Name | Status |
2025-09-19 12:16:58.428754 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 12:16:58.428765 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE |
2025-09-19 12:16:58.428776 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE |
2025-09-19 12:16:58.428809 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE |
2025-09-19 12:16:58.428821 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE |
2025-09-19 12:16:58.428832 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE |
2025-09-19 12:16:58.428843 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:16:58.834777 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 12:17:01.689756 | orchestrator | +------+--------+----------+
2025-09-19 12:17:01.689844 | orchestrator | | ID | Name | Status |
2025-09-19 12:17:01.689854 | orchestrator | |------+--------+----------|
2025-09-19 12:17:01.689862 | orchestrator | +------+--------+----------+
2025-09-19 12:17:02.033402 | orchestrator | + server_ping
2025-09-19 12:17:02.035184 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 12:17:02.037013 | orchestrator | ++ tr -d '\r'
2025-09-19 12:17:05.327623 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:17:05.327749 | orchestrator | + ping -c3 192.168.112.179
2025-09-19 12:17:05.337969 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-19 12:17:05.338056 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.90 ms
2025-09-19 12:17:06.334785 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.96 ms
2025-09-19 12:17:07.336951 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.07 ms
2025-09-19 12:17:07.337043 | orchestrator |
2025-09-19 12:17:07.337058 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-19 12:17:07.337071 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:17:07.337083 | orchestrator | rtt min/avg/max/mdev = 1.959/3.643/6.899/2.302 ms
2025-09-19 12:17:07.337094 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:17:07.337106 | orchestrator | + ping -c3 192.168.112.117
2025-09-19 12:17:07.348475 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-09-19 12:17:07.348510 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=6.17 ms
2025-09-19 12:17:08.346748 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.49 ms
2025-09-19 12:17:09.348006 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.09 ms
2025-09-19 12:17:09.348214 | orchestrator |
2025-09-19 12:17:09.348332 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-09-19 12:17:09.348346 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:17:09.348357 | orchestrator | rtt min/avg/max/mdev = 2.092/3.582/6.166/1.834 ms
2025-09-19 12:17:09.348380 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:17:09.348392 | orchestrator | + ping -c3 192.168.112.114
2025-09-19 12:17:09.361826 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2025-09-19 12:17:09.361891 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=8.75 ms
2025-09-19 12:17:10.357520 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.28 ms
2025-09-19 12:17:11.359337 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=2.37 ms
2025-09-19 12:17:11.359439 | orchestrator |
2025-09-19 12:17:11.359456 | orchestrator | --- 192.168.112.114 ping statistics ---
2025-09-19 12:17:11.359469 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 12:17:11.359480 | orchestrator | rtt min/avg/max/mdev = 2.275/4.466/8.753/3.031 ms
2025-09-19 12:17:11.360018 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:17:11.360043 | orchestrator | + ping -c3 192.168.112.144
2025-09-19 12:17:11.375026 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data.
2025-09-19 12:17:11.375049 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=9.58 ms
2025-09-19 12:17:12.372853 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=5.27 ms
2025-09-19 12:17:13.371157 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=2.00 ms
2025-09-19 12:17:13.371256 | orchestrator |
2025-09-19 12:17:13.371271 | orchestrator | --- 192.168.112.144 ping statistics ---
2025-09-19 12:17:13.371311 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:17:13.371322 | orchestrator | rtt min/avg/max/mdev = 1.995/5.614/9.576/3.104 ms
2025-09-19 12:17:13.371670 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:17:13.371714 | orchestrator | + ping -c3 192.168.112.109
2025-09-19 12:17:13.384774 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-09-19 12:17:13.384814 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.93 ms
2025-09-19 12:17:14.382278 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.97 ms
2025-09-19 12:17:15.382259 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.70 ms
2025-09-19 12:17:15.382367 | orchestrator |
2025-09-19 12:17:15.382381 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-09-19 12:17:15.382392 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 12:17:15.382402 | orchestrator | rtt min/avg/max/mdev = 1.695/3.865/6.929/2.228 ms
2025-09-19 12:17:15.382468 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-09-19 12:17:18.747884 | orchestrator | 2025-09-19 12:17:18 | INFO  | Live migrating server d2008e6c-45ed-4452-a036-1b38898fc77d
2025-09-19 12:17:29.903898 | orchestrator | 2025-09-19 12:17:29 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:32.258833 | orchestrator | 2025-09-19 12:17:32 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:34.582618 | orchestrator | 2025-09-19 12:17:34 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:36.837248 | orchestrator | 2025-09-19 12:17:36 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:39.106950 | orchestrator | 2025-09-19 12:17:39 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:41.606397 | orchestrator | 2025-09-19 12:17:41 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) is still in progress
2025-09-19 12:17:43.872858 | orchestrator | 2025-09-19 12:17:43 | INFO  | Live migration of d2008e6c-45ed-4452-a036-1b38898fc77d (test-4) completed with status ACTIVE
2025-09-19 12:17:43.872969 | orchestrator | 2025-09-19 12:17:43 | INFO  | Live migrating server b40efd5b-242c-4482-a55b-036ebd5fd3d5
2025-09-19 12:17:53.724941 | orchestrator | 2025-09-19 12:17:53 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:17:56.104187 | orchestrator | 2025-09-19 12:17:56 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:17:58.472523 | orchestrator | 2025-09-19 12:17:58 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:18:00.745170 | orchestrator | 2025-09-19 12:18:00 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:18:03.098537 | orchestrator | 2025-09-19 12:18:03 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:18:05.428636 | orchestrator | 2025-09-19 12:18:05 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:18:07.785364 | orchestrator | 2025-09-19 12:18:07 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) is still in progress
2025-09-19 12:18:10.136039 | orchestrator | 2025-09-19 12:18:10 | INFO  | Live migration of b40efd5b-242c-4482-a55b-036ebd5fd3d5 (test-3) completed with status ACTIVE
2025-09-19 12:18:10.136161 | orchestrator | 2025-09-19 12:18:10 | INFO  | Live migrating server f3ef682c-28a5-44ff-9acf-895322eb6953
2025-09-19 12:18:20.331633 | orchestrator | 2025-09-19 12:18:20 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:22.684643 | orchestrator | 2025-09-19 12:18:22 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:25.056604 | orchestrator | 2025-09-19 12:18:25 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:27.384115 | orchestrator | 2025-09-19 12:18:27 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:29.759459 | orchestrator | 2025-09-19 12:18:29 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:32.089933 | orchestrator | 2025-09-19 12:18:32 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) is still in progress
2025-09-19 12:18:34.466009 | orchestrator | 2025-09-19 12:18:34 | INFO  | Live migration of f3ef682c-28a5-44ff-9acf-895322eb6953 (test-2) completed with status ACTIVE
2025-09-19 12:18:34.466176 | orchestrator | 2025-09-19 12:18:34 | INFO  | Live migrating server b01631eb-3495-4ca0-9e67-c6b346ed3f9c
2025-09-19 12:18:44.684644 | orchestrator | 2025-09-19 12:18:44 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:47.233592 | orchestrator | 2025-09-19 12:18:47 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:49.604037 | orchestrator | 2025-09-19 12:18:49 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:51.890857 | orchestrator | 2025-09-19 12:18:51 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:54.179052 | orchestrator | 2025-09-19 12:18:54 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:56.495257 | orchestrator | 2025-09-19 12:18:56 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:18:58.840604 | orchestrator | 2025-09-19 12:18:58 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) is still in progress
2025-09-19 12:19:01.139518 | orchestrator | 2025-09-19 12:19:01 | INFO  | Live migration of b01631eb-3495-4ca0-9e67-c6b346ed3f9c (test-1) completed with status ACTIVE
2025-09-19 12:19:01.139621 | orchestrator | 2025-09-19 12:19:01 | INFO  | Live migrating server 25d1d02a-ced4-490b-b700-0cd2ce49984c
2025-09-19 12:19:12.383900 | orchestrator | 2025-09-19 12:19:12 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:14.757505 | orchestrator | 2025-09-19 12:19:14 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:17.146378 | orchestrator | 2025-09-19 12:19:17 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:19.525972 | orchestrator | 2025-09-19 12:19:19 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:21.805569 | orchestrator | 2025-09-19 12:19:21 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:24.100498 | orchestrator | 2025-09-19 12:19:24 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:26.398187 | orchestrator | 2025-09-19 12:19:26 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:28.687137 | orchestrator | 2025-09-19 12:19:28 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:31.010826 | orchestrator | 2025-09-19 12:19:31 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) is still in progress
2025-09-19 12:19:33.291125 | orchestrator | 2025-09-19 12:19:33 | INFO  | Live migration of 25d1d02a-ced4-490b-b700-0cd2ce49984c (test) completed with status ACTIVE
2025-09-19 12:19:33.605903 | orchestrator | + compute_list
2025-09-19 12:19:33.605998 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 12:19:36.383685 | orchestrator | +------+--------+----------+
2025-09-19 12:19:36.383816 | orchestrator | | ID | Name | Status |
2025-09-19 12:19:36.383830 | orchestrator | |------+--------+----------|
2025-09-19 12:19:36.383839 | orchestrator | +------+--------+----------+
2025-09-19 12:19:36.672343 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 12:19:39.455084 | orchestrator | +------+--------+----------+
2025-09-19 12:19:39.455224 | orchestrator | | ID | Name | Status |
2025-09-19 12:19:39.455250 | orchestrator | |------+--------+----------|
2025-09-19 12:19:39.455271 | orchestrator | +------+--------+----------+
2025-09-19 12:19:39.757881 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 12:19:43.048021 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:19:43.048147 | orchestrator | | ID | Name | Status |
2025-09-19 12:19:43.048172 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 12:19:43.048191 | orchestrator | | d2008e6c-45ed-4452-a036-1b38898fc77d | test-4 | ACTIVE |
2025-09-19 12:19:43.048208 | orchestrator | | b40efd5b-242c-4482-a55b-036ebd5fd3d5 | test-3 | ACTIVE |
2025-09-19 12:19:43.048224 | orchestrator | | f3ef682c-28a5-44ff-9acf-895322eb6953 | test-2 | ACTIVE |
2025-09-19 12:19:43.048241 | orchestrator | | b01631eb-3495-4ca0-9e67-c6b346ed3f9c | test-1 | ACTIVE |
2025-09-19 12:19:43.048259 | orchestrator | | 25d1d02a-ced4-490b-b700-0cd2ce49984c | test | ACTIVE |
2025-09-19 12:19:43.048277 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 12:19:43.403001 | orchestrator | + server_ping
2025-09-19 12:19:43.405107 | orchestrator | ++ tr -d '\r'
2025-09-19 12:19:43.405142 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 12:19:46.354576 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:19:46.354695 | orchestrator | + ping -c3 192.168.112.179
2025-09-19 12:19:46.365557 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-19 12:19:46.365631 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=8.90 ms
2025-09-19 12:19:47.360071 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.35 ms
2025-09-19 12:19:48.361357 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.87 ms
2025-09-19 12:19:48.361457 | orchestrator |
2025-09-19 12:19:48.361473 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-19 12:19:48.361486 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:19:48.361497 | orchestrator | rtt min/avg/max/mdev = 1.865/4.371/8.902/3.209 ms
2025-09-19 12:19:48.361983 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:19:48.362047 | orchestrator | + ping -c3 192.168.112.117
2025-09-19 12:19:48.373055 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2025-09-19 12:19:48.373117 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=5.90 ms
2025-09-19 12:19:49.371002 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=3.02 ms
2025-09-19 12:19:50.371351 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=2.09 ms
2025-09-19 12:19:50.371453 | orchestrator |
2025-09-19 12:19:50.371468 | orchestrator | --- 192.168.112.117 ping statistics ---
2025-09-19 12:19:50.371481 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 12:19:50.371492 | orchestrator | rtt min/avg/max/mdev = 2.086/3.667/5.899/1.623 ms
2025-09-19 12:19:50.372189 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:19:50.372292 | orchestrator | + ping -c3 192.168.112.114
2025-09-19 12:19:50.385721 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2025-09-19 12:19:50.385771 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=8.65 ms
2025-09-19 12:19:51.382589 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=3.02 ms
2025-09-19 12:19:52.383330 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=2.21 ms
2025-09-19 12:19:52.383433 | orchestrator |
2025-09-19 12:19:52.383450 | orchestrator | --- 192.168.112.114 ping statistics ---
2025-09-19 12:19:52.383463 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:19:52.383475 | orchestrator | rtt min/avg/max/mdev = 2.213/4.630/8.654/2.864 ms
2025-09-19 12:19:52.384165 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:19:52.384193 | orchestrator | + ping -c3 192.168.112.144
2025-09-19 12:19:52.397908 | orchestrator | PING 192.168.112.144 (192.168.112.144) 56(84) bytes of data.
2025-09-19 12:19:52.397991 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=1 ttl=63 time=9.38 ms
2025-09-19 12:19:53.392933 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=2 ttl=63 time=2.33 ms
2025-09-19 12:19:54.394759 | orchestrator | 64 bytes from 192.168.112.144: icmp_seq=3 ttl=63 time=2.29 ms
2025-09-19 12:19:54.394917 | orchestrator |
2025-09-19 12:19:54.394934 | orchestrator | --- 192.168.112.144 ping statistics ---
2025-09-19 12:19:54.394946 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 12:19:54.394958 | orchestrator | rtt min/avg/max/mdev = 2.291/4.666/9.381/3.333 ms
2025-09-19 12:19:54.395705 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 12:19:54.395730 | orchestrator | + ping -c3 192.168.112.109
2025-09-19 12:19:54.408099 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-09-19 12:19:54.408148 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=7.01 ms
2025-09-19 12:19:55.405220 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.92 ms
2025-09-19 12:19:56.405938 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.24 ms
2025-09-19 12:19:56.406087 | orchestrator |
2025-09-19 12:19:56.406107 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-09-19 12:19:56.406119 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 12:19:56.406131 | orchestrator | rtt min/avg/max/mdev = 2.237/4.057/7.014/2.109 ms
2025-09-19 12:19:56.885408 | orchestrator | ok: Runtime: 0:19:00.538297
2025-09-19 12:19:56.945045 |
2025-09-19 12:19:56.945188 | TASK [Run tempest]
2025-09-19 12:19:57.480290 | orchestrator | skipping: Conditional result was False
2025-09-19 12:19:57.498733 |
2025-09-19 12:19:57.498948 | TASK [Check prometheus alert status]
2025-09-19 12:19:58.037310 | orchestrator | skipping: Conditional result was False
2025-09-19 12:19:58.042239 |
2025-09-19 12:19:58.042436 | PLAY RECAP
2025-09-19 12:19:58.042586 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-09-19 12:19:58.042661 |
2025-09-19 12:19:58.262424 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 12:19:58.263535 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 12:19:58.996751 |
2025-09-19 12:19:58.996909 | PLAY [Post output play]
2025-09-19 12:19:59.012788 |
2025-09-19 12:19:59.012923 | LOOP [stage-output : Register sources]
2025-09-19 12:19:59.084058 |
2025-09-19 12:19:59.084403 | TASK [stage-output : Check sudo]
2025-09-19 12:19:59.905034 | orchestrator | sudo: a password is required
2025-09-19 12:20:00.123542 | orchestrator | ok: Runtime: 0:00:00.009935
2025-09-19 12:20:00.137194 |
2025-09-19 12:20:00.137347 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 12:20:00.173349 |
2025-09-19 12:20:00.173611 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 12:20:00.252325 | orchestrator | ok
2025-09-19 12:20:00.261483 |
2025-09-19 12:20:00.261627 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 12:20:00.687227 | orchestrator | ok: "docs"
2025-09-19 12:20:00.687559 |
2025-09-19 12:20:00.934686 | orchestrator | ok: "artifacts"
2025-09-19 12:20:01.182806 | orchestrator | ok: "logs"
2025-09-19 12:20:01.204495 |
2025-09-19 12:20:01.204671 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 12:20:01.243233 |
2025-09-19 12:20:01.243509 | TASK [stage-output : Make all log files readable]
2025-09-19 12:20:01.526708 | orchestrator | ok
2025-09-19 12:20:01.535889 |
2025-09-19 12:20:01.536076 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 12:20:01.570605 | orchestrator | skipping: Conditional result was False
2025-09-19 12:20:01.585722 |
2025-09-19 12:20:01.585858 | TASK [stage-output : Discover log files for compression]
2025-09-19 12:20:01.609712 | orchestrator | skipping: Conditional result was False
2025-09-19 12:20:01.619481 |
2025-09-19 12:20:01.619599 | LOOP [stage-output : Archive everything from logs]
2025-09-19 12:20:01.657161 |
2025-09-19 12:20:01.657312 | PLAY [Post cleanup play]
2025-09-19 12:20:01.665029 |
2025-09-19 12:20:01.665151 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 12:20:01.718804 | orchestrator | ok
2025-09-19 12:20:01.728967 |
2025-09-19 12:20:01.729113 | TASK [Set cloud fact (local deployment)]
2025-09-19 12:20:01.752238 | orchestrator | skipping: Conditional result was False
2025-09-19 12:20:01.763263 |
2025-09-19 12:20:01.763383 | TASK [Clean the cloud environment]
2025-09-19 12:20:02.923018 | orchestrator | 2025-09-19 12:20:02 - clean up servers
2025-09-19 12:20:03.648411 | orchestrator | 2025-09-19 12:20:03 - testbed-manager
2025-09-19 12:20:03.732411 | orchestrator | 2025-09-19 12:20:03 - testbed-node-2
2025-09-19 12:20:03.817727 | orchestrator | 2025-09-19 12:20:03 - testbed-node-3
2025-09-19 12:20:03.905189 | orchestrator | 2025-09-19 12:20:03 - testbed-node-0
2025-09-19 12:20:03.993761 | orchestrator | 2025-09-19 12:20:03 - testbed-node-5
2025-09-19 12:20:04.088258 | orchestrator | 2025-09-19 12:20:04 - testbed-node-4
2025-09-19 12:20:04.184970 | orchestrator | 2025-09-19 12:20:04 - testbed-node-1
2025-09-19 12:20:04.270059 | orchestrator | 2025-09-19 12:20:04 - clean up keypairs
2025-09-19 12:20:04.290669 | orchestrator | 2025-09-19 12:20:04 - testbed
2025-09-19 12:20:04.317384 | orchestrator | 2025-09-19 12:20:04 - wait for servers to be gone
2025-09-19 12:20:13.030742 | orchestrator | 2025-09-19 12:20:13 - clean up ports
2025-09-19 12:20:13.236448 | orchestrator | 2025-09-19 12:20:13 - 2eb10a18-0d66-4452-94be-514006e2d3f8
2025-09-19 12:20:13.505282 | orchestrator | 2025-09-19 12:20:13 - 6989f155-dc57-45ba-ad86-feb93a4805e6
2025-09-19 12:20:13.792500 | orchestrator | 2025-09-19 12:20:13 - 982c5818-6d08-4148-a5ca-bc557bdbd626
2025-09-19 12:20:14.001728 | orchestrator | 2025-09-19 12:20:14 - d471fe81-502d-4a82-8210-c9f283e8c6e3
2025-09-19 12:20:14.407877 | orchestrator | 2025-09-19 12:20:14 - d7d53295-7c1f-4a85-85ce-0e77df4f6b3a
2025-09-19 12:20:14.653622 | orchestrator | 2025-09-19 12:20:14 - e998afa3-3724-4a21-8cac-a69b13a65e05
2025-09-19 12:20:14.861879 | orchestrator | 2025-09-19 12:20:14 - ed561693-eaa4-418e-8bd8-ad83bd284631
2025-09-19 12:20:15.083595 | orchestrator | 2025-09-19 12:20:15 - clean up volumes
2025-09-19 12:20:15.190892 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-3-node-base
2025-09-19 12:20:15.236098 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-manager-base
2025-09-19 12:20:15.280210 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-2-node-base
2025-09-19 12:20:15.319732 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-4-node-base
2025-09-19 12:20:15.363184 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-0-node-base
2025-09-19 12:20:15.403697 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-5-node-base
2025-09-19 12:20:15.441797 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-1-node-base
2025-09-19 12:20:15.479480 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-4-node-4
2025-09-19 12:20:15.520771 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-6-node-3
2025-09-19 12:20:15.562514 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-3-node-3
2025-09-19 12:20:15.610403 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-0-node-3
2025-09-19 12:20:15.654857 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-7-node-4
2025-09-19 12:20:15.696368 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-1-node-4
2025-09-19 12:20:15.738423 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-2-node-5
2025-09-19 12:20:15.778139 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-5-node-5
2025-09-19 12:20:15.820596 | orchestrator | 2025-09-19 12:20:15 - testbed-volume-8-node-5
2025-09-19 12:20:15.861646 | orchestrator | 2025-09-19 12:20:15 - disconnect routers
2025-09-19 12:20:15.979375 | orchestrator | 2025-09-19 12:20:15 - testbed
2025-09-19 12:20:16.953198 | orchestrator | 2025-09-19 12:20:16 - clean up subnets
2025-09-19 12:20:17.007416 | orchestrator | 2025-09-19 12:20:17 - subnet-testbed-management
2025-09-19 12:20:17.167111 | orchestrator | 2025-09-19 12:20:17 - clean up networks
2025-09-19 12:20:17.855987 | orchestrator | 2025-09-19 12:20:17 - net-testbed-management
2025-09-19 12:20:18.137792 | orchestrator | 2025-09-19 12:20:18 - clean up security groups
2025-09-19 12:20:18.188578 | orchestrator | 2025-09-19 12:20:18 - testbed-node
2025-09-19 12:20:18.307516 | orchestrator | 2025-09-19 12:20:18 - testbed-management
2025-09-19 12:20:18.423150 | orchestrator | 2025-09-19 12:20:18 - clean up floating ips
2025-09-19 12:20:18.458813 | orchestrator | 2025-09-19 12:20:18 - 81.163.192.121
2025-09-19 12:20:18.807040 | orchestrator | 2025-09-19 12:20:18 - clean up routers
2025-09-19 12:20:18.924270 | orchestrator | 2025-09-19 12:20:18 - testbed
2025-09-19 12:20:19.815728 | orchestrator | ok: Runtime: 0:00:17.687782
2025-09-19 12:20:19.819892 |
2025-09-19 12:20:19.820072 | PLAY RECAP
2025-09-19 12:20:19.820208 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-19 12:20:19.820271 |
2025-09-19 12:20:19.949947 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 12:20:19.952394 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 12:20:20.669151 |
2025-09-19 12:20:20.669357 | PLAY [Cleanup play]
2025-09-19 12:20:20.687610 |
2025-09-19 12:20:20.687743 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 12:20:20.747902 | orchestrator | ok
2025-09-19 12:20:20.760445 |
2025-09-19 12:20:20.760639 | TASK [Set cloud fact (local deployment)]
2025-09-19 12:20:20.785419 | orchestrator | skipping: Conditional result was False
2025-09-19 12:20:20.804226 |
2025-09-19 12:20:20.804403 | TASK [Clean the cloud environment]
2025-09-19 12:20:21.922586 | orchestrator | 2025-09-19 12:20:21 - clean up servers
2025-09-19 12:20:22.421501 | orchestrator | 2025-09-19 12:20:22 - clean up keypairs
2025-09-19 12:20:22.438622 | orchestrator | 2025-09-19 12:20:22 - wait for servers to be gone
2025-09-19 12:20:22.474522 | orchestrator | 2025-09-19 12:20:22 - clean up ports
2025-09-19 12:20:22.549598 | orchestrator | 2025-09-19 12:20:22 - clean up volumes
2025-09-19 12:20:22.625568 | orchestrator | 2025-09-19 12:20:22 - disconnect routers
2025-09-19 12:20:22.658177 | orchestrator | 2025-09-19 12:20:22 - clean up subnets
2025-09-19 12:20:22.676340 | orchestrator | 2025-09-19 12:20:22 - clean up networks
2025-09-19 12:20:22.842676 | orchestrator | 2025-09-19 12:20:22 - clean up security groups
2025-09-19 12:20:22.881623 | orchestrator | 2025-09-19 12:20:22 - clean up floating ips
2025-09-19 12:20:22.904197 | orchestrator | 2025-09-19 12:20:22 - clean up routers
2025-09-19 12:20:23.352702 | orchestrator | ok: Runtime: 0:00:01.378865
2025-09-19 12:20:23.356516 |
2025-09-19 12:20:23.356678 | PLAY RECAP
2025-09-19 12:20:23.356798 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 12:20:23.356863 |
2025-09-19 12:20:23.479958 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 12:20:23.482624 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 12:20:24.214489 |
2025-09-19 12:20:24.214638 | PLAY [Base post-fetch]
2025-09-19 12:20:24.229791 |
2025-09-19 12:20:24.229924 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-19 12:20:24.282233 | orchestrator | skipping: Conditional result was False
2025-09-19 12:20:24.297225 |
2025-09-19 12:20:24.297422 | TASK [fetch-output : Set log path for single node]
2025-09-19 12:20:24.354653 | orchestrator | ok
2025-09-19 12:20:24.362600 |
2025-09-19 12:20:24.362728 | LOOP [fetch-output : Ensure local output dirs]
2025-09-19 12:20:24.846403 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/logs"
2025-09-19 12:20:25.115711 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/artifacts"
2025-09-19 12:20:25.363692 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c7a2390133a342ccabfde485aae75074/work/docs"
2025-09-19 12:20:25.380547 |
2025-09-19 12:20:25.380667 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-19 12:20:26.286428 | orchestrator | changed: .d..t...... ./
2025-09-19 12:20:26.286758 | orchestrator | changed: All items complete
2025-09-19 12:20:26.286818 |
2025-09-19 12:20:27.034304 | orchestrator | changed: .d..t...... ./
2025-09-19 12:20:27.728193 | orchestrator | changed: .d..t...... ./
2025-09-19 12:20:27.758553 |
2025-09-19 12:20:27.758688 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-19 12:20:28.261827 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.006542
2025-09-19 12:20:28.543683 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.012997
2025-09-19 12:20:28.567479 |
2025-09-19 12:20:28.567598 | PLAY RECAP
2025-09-19 12:20:28.567671 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 12:20:28.567707 |
2025-09-19 12:20:28.690094 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 12:20:28.692634 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 12:20:29.449416 |
2025-09-19 12:20:29.449574 | PLAY [Base post]
2025-09-19 12:20:29.463962 |
2025-09-19 12:20:29.464119 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-19 12:20:30.428321 | orchestrator | changed
2025-09-19 12:20:30.438572 |
2025-09-19 12:20:30.438695 | PLAY RECAP
2025-09-19 12:20:30.438768 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-19 12:20:30.438873 |
2025-09-19 12:20:30.564810 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 12:20:30.567191 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-19 12:20:31.328415 |
2025-09-19 12:20:31.328571 | PLAY [Base post-logs]
2025-09-19 12:20:31.338922 |
2025-09-19 12:20:31.339421 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-19 12:20:31.801142 | localhost | changed
2025-09-19 12:20:31.811498 |
2025-09-19 12:20:31.811662 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-19 12:20:31.837551 | localhost | ok
2025-09-19 12:20:31.840754 |
2025-09-19 12:20:31.840866 | TASK [Set 
zuul-log-path fact] 2025-09-19 12:20:31.855950 | localhost | ok 2025-09-19 12:20:31.865157 | 2025-09-19 12:20:31.865269 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-19 12:20:31.891149 | localhost | ok 2025-09-19 12:20:31.897005 | 2025-09-19 12:20:31.897181 | TASK [upload-logs : Create log directories] 2025-09-19 12:20:32.392201 | localhost | changed 2025-09-19 12:20:32.395046 | 2025-09-19 12:20:32.395158 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-19 12:20:32.875298 | localhost -> localhost | ok: Runtime: 0:00:00.006732 2025-09-19 12:20:32.879478 | 2025-09-19 12:20:32.879603 | TASK [upload-logs : Upload logs to log server] 2025-09-19 12:20:33.434111 | localhost | Output suppressed because no_log was given 2025-09-19 12:20:33.437771 | 2025-09-19 12:20:33.437961 | LOOP [upload-logs : Compress console log and json output] 2025-09-19 12:20:33.494933 | localhost | skipping: Conditional result was False 2025-09-19 12:20:33.499933 | localhost | skipping: Conditional result was False 2025-09-19 12:20:33.512596 | 2025-09-19 12:20:33.512826 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-19 12:20:33.570200 | localhost | skipping: Conditional result was False 2025-09-19 12:20:33.571831 | 2025-09-19 12:20:33.574535 | localhost | skipping: Conditional result was False 2025-09-19 12:20:33.586644 | 2025-09-19 12:20:33.586910 | LOOP [upload-logs : Upload console log and json output]
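The "Clean the cloud environment" task in the log above tears down OpenStack resources in a fixed dependency order: servers first (so ports and volumes detach), then router interfaces are disconnected before subnets, networks, and finally the routers themselves are removed. The sketch below only encodes that ordering; the step names mirror the log output, while the handler callables are hypothetical stand-ins, not the testbed's actual cleanup implementation.

```python
# Teardown order as observed in the "Clean the cloud environment" log output.
# Comments note the dependency each position satisfies.
TEARDOWN_ORDER = [
    "servers",             # instances first, so their ports/volumes detach
    "keypairs",
    "wait for servers",    # block until instances are really gone
    "ports",
    "volumes",
    "disconnect routers",  # detach router interfaces before removing subnets
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",             # routers last, once nothing references them
]


def run_teardown(handlers):
    """Invoke one (hypothetical) handler per step, in dependency order.

    `handlers` maps step names to callables; steps without a handler are
    still recorded, so the returned list always reflects the full order.
    """
    done = []
    for step in TEARDOWN_ORDER:
        handler = handlers.get(step)
        if handler is not None:
            handler()
        done.append(step)
    return done
```

In a real cleanup the handlers would wrap cloud API calls (for example via openstacksdk); keeping the order in one list makes the dependency constraints easy to audit and test.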